* patch has been queued to stable release 23.11.3
@ 2024-11-11 6:26 Xueming Li
2024-11-11 6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
` (120 more replies)
0 siblings, 121 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Xueming Li; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.
Also note that after the patch there's a diff of the upstream commit vs. the
patch applied to the branch. This indicates whether any rebasing was
needed to apply the patch to the stable branch. If there were code changes
during rebasing (i.e. not only metadata diffs), please double-check that
the rebase was done correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0278b12d3741c9375edeced034a111c1e551bfba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0278b12d3741c9375edeced034a111c1e551bfba Mon Sep 17 00:00:00 2001
From: Xueming Li <xuemingl@nvidia.com>
Date: Mon, 11 Nov 2024 14:23:04 +0800
Subject: [PATCH] 23.11.3-rc1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
Aleksandr Loktionov (1):
net/i40e/base: fix misleading debug logs and comments
Anatoly Burakov (2):
net/i40e/base: fix setting flags in init function
net/i40e/base: add missing X710TL device check
Artur Tyminski (1):
net/i40e/base: fix DDP loading with reserved track ID
Barbara Skobiej (3):
net/ixgbe/base: fix unchecked return value
net/i40e/base: fix unchecked return value
net/i40e/base: fix loop bounds
Bill Xiang (2):
vhost: fix offset while mapping log base address
vdpa: update used flags in used ring relay
Bing Zhao (1):
net/mlx5: workaround list management of Rx queue control
Bruce Richardson (7):
eal/x86: fix 32-bit write combining store
net/iavf: delay VF reset command
net/i40e: fix AVX-512 pointer copy on 32-bit
net/ice: fix AVX-512 pointer copy on 32-bit
net/iavf: fix AVX-512 pointer copy on 32-bit
common/idpf: fix AVX-512 pointer copy on 32-bit
build: remove version check on compiler links function
Chaoyong He (2):
net/nfp: fix link change return value
net/nfp: fix pause frame setting check
Chengwen Feng (4):
examples/eventdev: fix queue crash with generic pipeline
ethdev: verify queue ID in Tx done cleanup
net/hns3: verify reset type from firmware
dmadev: fix potential null pointer access
Dave Ertman (1):
net/ice/base: fix VLAN replay after reset
David Marchand (3):
log: add a per line log helper
drivers: remove redundant newline from logs
net/iavf: preserve MAC address with i40e PF Linux driver
Eryk Rybak (1):
net/i40e/base: fix blinking X722 with X557 PHY
Fabio Pricoco (2):
net/ice/base: fix iteration of TLVs in Preserved Fields Area
net/ice/base: add bounds check
Gagandeep Singh (2):
crypto/dpaa2_sec: fix memory leak
bus/dpaa: fix PFDRs leaks due to FQRNIs
Hemant Agrawal (2):
bus/dpaa: fix VSP for 1G fm1-mac9 and 10
bus/dpaa: fix the fman details status
Hernan Vargas (2):
baseband/acc: fix access to deallocated mem
baseband/acc: fix soft output bypass RM
Jie Hai (2):
net/hns3: remove some basic address dump
net/hns3: fix dump counter of registers
Joshua Washington (5):
net/gve: fix mbuf allocation memory leak for DQ Rx
net/gve: always attempt Rx refill on DQ
net/gve: fix refill logic causing memory corruption
net/gve: add IO memory barriers before reading descriptors
net/gve/base: fix build with Fedora Rawhide
Julien Hascoet (1):
crypto/scheduler: fix session size computation
Jun Wang (1):
net/e1000: fix link status crash in secondary process
Kaiwen Deng (1):
net/iavf: fix crash when link is unstable
Kommula Shiva Shankar (1):
net/virtio-user: reset used index counter
Malcolm Bumgardner (1):
dev: fix callback lookup when unregistering device
Maxime Coquelin (1):
vhost: restrict set max queue pair API to VDUSE
Mihai Brodschi (1):
net/memif: fix buffer overflow in zero copy Rx
Mingjin Ye (1):
bus/vdev: revert fix devargs in secondary process
Niall Meade (1):
ethdev: fix overflow in descriptor count
Nithin Dabilpuram (2):
common/cnxk: fix inline CTX write
common/cnxk: fix CPT HW word size for outbound SA
Oleksandr Nahnybida (1):
pcapng: fix handling of chained mbufs
Paul Greenwalt (1):
net/ice/base: fix link speed for 200G
Pavan Nikhilesh (3):
test/event: fix schedule type
test/event: fix target event queue
common/cnxk: fix IRQ reconfiguration
Praveen Shetty (2):
net/cpfl: add checks for flow action types
net/cpfl: fix parsing protocol ID mask field
Qin Ke (2):
net/nfp: fix type declaration of some variables
net/nfp: fix representor port link status update
Radoslaw Tyl (1):
net/i40e/base: fix repeated register dumps
Rakesh Kudurumalla (6):
net/cnxk: fix Rx timestamp handling for VF
net/cnxk: fix Rx offloads to handle timestamp
event/cnxk: fix Rx timestamp handling
net/cnxk: fix OOP handling for inbound packets
event/cnxk: fix OOP handling in event mode
common/cnxk: fix base log level
Rohit Raj (1):
net/dpaa: fix typecasting channel ID
Shihong Wang (1):
net/nfp: do not set IPv6 flag in transport mode
Shreesh Adiga (1):
net/mana: support rdma-core via pkg-config
Sivaprasad Tummala (1):
power: fix mapped lcore ID
Srikanth Yalavarthi (1):
ml/cnxk: fix handling of TVM model I/O
Stephen Hemminger (22):
bpf: fix free function mismatch if convert fails
baseband/la12xx: fix use after free in modem config
common/qat: fix use after free in device probe
common/idpf: fix use after free in mailbox init
crypto/bcmfs: fix free function mismatch
dma/idxd: fix free function mismatch in device probe
event/cnxk: fix free function mismatch in port config
net/cnxk: fix use after free in mempool create
net/cpfl: fix invalid free in JSON parser
net/e1000: fix use after free in filter flush
net/nfp: fix double free in flow destroy
net/sfc: fix use after free in debug logs
raw/ifpga/base: fix use after free
raw/ifpga: fix free function mismatch in interrupt config
examples/vhost: fix free function mismatch
app/dumpcap: fix handling of jumbo frames
net/tap: avoid memcpy with null argument
app/testpmd: remove unnecessary cast
net/pcap: set live interface as non-blocking
net/ena: revert redefining memcpy
net/tap: restrict maximum number of MP FDs
net/pcap: fix blocking Rx
Sunil Kumar Kori (1):
common/cnxk: fix MAC address change with active VF
Tathagat Priyadarshi (2):
net/gve: fix queue setup and stop
net/gve: fix Tx for chained mbuf
Tejasree Kondoj (1):
examples/ipsec-secgw: fix dequeue count from cryptodev
Thomas Monjalon (1):
net/nfb: fix use after free
Timothy Redaelli (1):
net/ionic: fix build with Fedora Rawhide
Vanshika Shukla (1):
net/dpaa: fix reallocate mbuf handling
Varun Sethi (1):
common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog
Viacheslav Ovsiienko (8):
net/mlx5/hws: fix flex item as tunnel header
net/mlx5: add flex item query for tunnel mode
net/mlx5: fix flex item tunnel mode
net/mlx5: fix number of supported flex parsers
app/testpmd: remove flex item init command leftover
net/mlx5: fix next protocol validation after flex item
net/mlx5: fix non full word sample fields in flex item
net/mlx5: fix flex item header length field translation
Vladimir Medvedkin (3):
fib6: add runtime checks in AVX512 lookup
fib: fix AVX512 lookup
hash: fix thash LFSR initialization
Wathsala Vithanage (1):
power: enable CPPC
Xinying Yu (2):
vdpa/nfp: fix hardware initialization
vdpa/nfp: fix reconfiguration
Xueming Li (1):
23.11.3-rc1
Zerun Fu (1):
net/nfp: notify flower firmware about PF speed
.mailmap | 11 +
app/dumpcap/main.c | 15 +-
app/test-pmd/cmdline.c | 456 +++++++++---------
app/test-pmd/cmdline_flow.c | 12 -
app/test/test_event_dma_adapter.c | 5 +-
app/test/test_eventdev.c | 1 +
app/test/test_pcapng.c | 12 +-
app/test/test_power_cpufreq.c | 21 +-
devtools/checkpatches.sh | 8 +
drivers/baseband/acc/rte_acc100_pmd.c | 58 +--
drivers/baseband/acc/rte_vrb_pmd.c | 75 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
drivers/baseband/la12xx/bbdev_la12xx.c | 5 +-
.../baseband/turbo_sw/bbdev_turbo_software.c | 4 +-
drivers/bus/cdx/cdx_vfio.c | 8 +-
drivers/bus/dpaa/base/fman/fman.c | 29 +-
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +-
drivers/bus/dpaa/base/qbman/qman.c | 46 +-
drivers/bus/dpaa/include/fman.h | 3 +-
drivers/bus/fslmc/fslmc_bus.c | 8 +-
drivers/bus/fslmc/fslmc_vfio.c | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 4 +-
drivers/bus/ifpga/ifpga_bus.c | 8 +-
drivers/bus/vdev/vdev.c | 21 +-
drivers/bus/vdev/vdev_params.c | 2 +-
drivers/bus/vmbus/vmbus_common.c | 2 +-
drivers/common/cnxk/roc_dev.c | 18 +-
drivers/common/cnxk/roc_dev_priv.h | 2 +
drivers/common/cnxk/roc_ie_ot.c | 1 +
drivers/common/cnxk/roc_model.c | 2 +-
drivers/common/cnxk/roc_nix_inl.c | 8 +
drivers/common/cnxk/roc_nix_mac.c | 10 -
drivers/common/cnxk/roc_nix_ops.c | 20 +-
drivers/common/cnxk/roc_nix_tm.c | 2 +-
drivers/common/cnxk/roc_nix_tm_mark.c | 2 +-
drivers/common/cnxk/roc_nix_tm_ops.c | 2 +-
drivers/common/cnxk/roc_nix_tm_utils.c | 2 +-
drivers/common/cnxk/roc_platform.c | 2 +-
drivers/common/cnxk/roc_sso.c | 9 +-
drivers/common/cnxk/roc_tim.c | 2 +-
drivers/common/cpt/cpt_ucode.h | 4 +-
drivers/common/dpaax/caamflib/desc/pdcp.h | 10 +
drivers/common/iavf/iavf_prototype.h | 1 +
drivers/common/iavf/version.map | 1 +
drivers/common/idpf/base/idpf_osdep.h | 10 +-
drivers/common/idpf/idpf_common_device.c | 3 +-
drivers/common/idpf/idpf_common_logs.h | 5 +-
drivers/common/idpf/idpf_common_rxtx_avx512.c | 7 +
drivers/common/nfp/nfp_common_ctrl.h | 1 +
drivers/common/octeontx/octeontx_mbox.c | 4 +-
drivers/common/qat/meson.build | 2 +-
drivers/common/qat/qat_device.c | 6 +-
drivers/common/qat/qat_pf2vf.c | 4 +-
drivers/common/qat/qat_qp.c | 2 +-
drivers/compress/isal/isal_compress_pmd.c | 78 +--
drivers/compress/octeontx/otx_zip.h | 12 +-
drivers/compress/octeontx/otx_zip_pmd.c | 14 +-
drivers/compress/zlib/zlib_pmd.c | 26 +-
drivers/compress/zlib/zlib_pmd_ops.c | 4 +-
drivers/crypto/bcmfs/bcmfs_device.c | 4 +-
drivers/crypto/bcmfs/bcmfs_qp.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.c | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 32 +-
drivers/crypto/caam_jr/caam_jr_uio.c | 6 +-
drivers/crypto/ccp/ccp_dev.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cnxk_se.h | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 43 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 16 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
drivers/crypto/dpaa_sec/dpaa_sec_log.h | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 6 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.c | 4 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 2 +-
drivers/crypto/ipsec_mb/meson.build | 2 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 28 +-
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 +-
.../crypto/octeontx/otx_cryptodev_hw_access.h | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 42 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 44 +-
drivers/crypto/qat/qat_asym.c | 2 +-
drivers/crypto/qat/qat_sym_session.c | 12 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/uadk/uadk_crypto_pmd.c | 8 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/dma/dpaa/dpaa_qdma.c | 40 +-
drivers/dma/dpaa2/dpaa2_qdma.c | 10 +-
drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
drivers/dma/idxd/idxd_common.c | 2 +-
drivers/dma/idxd/idxd_pci.c | 8 +-
drivers/dma/ioat/ioat_dmadev.c | 14 +-
drivers/event/cnxk/cn10k_eventdev.c | 46 ++
drivers/event/cnxk/cn9k_eventdev.c | 31 ++
drivers/event/cnxk/cnxk_eventdev.c | 2 +-
drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 2 +-
drivers/event/dlb2/dlb2.c | 220 ++++-----
drivers/event/dlb2/dlb2_xstats.c | 6 +-
drivers/event/dlb2/pf/dlb2_main.c | 52 +-
drivers/event/dlb2/pf/dlb2_pf.c | 20 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 14 +-
drivers/event/octeontx/timvf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 30 +-
drivers/event/opdl/opdl_test.c | 116 ++---
drivers/event/sw/sw_evdev.c | 22 +-
drivers/event/sw/sw_evdev_xstats.c | 4 +-
drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 8 +-
drivers/mempool/octeontx/octeontx_fpavf.c | 22 +-
.../mempool/octeontx/rte_mempool_octeontx.c | 6 +-
drivers/ml/cnxk/cn10k_ml_dev.c | 32 +-
drivers/ml/cnxk/cnxk_ml_ops.c | 32 +-
drivers/ml/cnxk/mvtvm_ml_model.c | 2 +-
drivers/net/atlantic/atl_rxtx.c | 4 +-
drivers/net/atlantic/hw_atl/hw_atl_utils.c | 12 +-
drivers/net/axgbe/axgbe_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.c | 8 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 +-
drivers/net/bonding/rte_eth_bond_alb.c | 2 +-
drivers/net/bonding/rte_eth_bond_api.c | 4 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
drivers/net/cnxk/cn10k_ethdev.c | 18 +-
drivers/net/cnxk/cn10k_ethdev_sec.c | 10 +
drivers/net/cnxk/cn9k_ethdev.c | 17 +-
drivers/net/cnxk/cnxk_ethdev.c | 6 +-
drivers/net/cnxk/cnxk_ethdev.h | 11 +
drivers/net/cnxk/cnxk_ethdev_mcs.c | 14 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 2 +-
drivers/net/cnxk/cnxk_ethdev_sec.c | 2 +-
drivers/net/cnxk/version.map | 1 +
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 11 +
drivers/net/cpfl/cpfl_flow_parser.c | 37 +-
drivers/net/cpfl/cpfl_fxp_rule.c | 8 +-
drivers/net/dpaa/dpaa_ethdev.c | 6 +-
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 16 +-
drivers/net/dpaa2/dpaa2_flow.c | 36 +-
drivers/net/dpaa2/dpaa2_mux.c | 4 +-
drivers/net/dpaa2/dpaa2_recycle.c | 6 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 14 +-
drivers/net/dpaa2/dpaa2_sparser.c | 8 +-
drivers/net/dpaa2/dpaa2_tm.c | 24 +-
drivers/net/e1000/em_ethdev.c | 3 +
drivers/net/e1000/igb_ethdev.c | 6 +-
drivers/net/ena/base/ena_plat_dpdk.h | 10 +-
drivers/net/enetc/enetc_ethdev.c | 4 +-
drivers/net/enetfec/enet_ethdev.c | 4 +-
drivers/net/enetfec/enet_uio.c | 10 +-
drivers/net/enic/enic_ethdev.c | 20 +-
drivers/net/enic/enic_flow.c | 20 +-
drivers/net/enic/enic_vf_representor.c | 16 +-
drivers/net/failsafe/failsafe_args.c | 2 +-
drivers/net/failsafe/failsafe_eal.c | 2 +-
drivers/net/failsafe/failsafe_ether.c | 4 +-
drivers/net/failsafe/failsafe_intr.c | 6 +-
drivers/net/gve/base/gve_adminq.c | 2 +-
drivers/net/gve/base/gve_osdep.h | 48 +-
drivers/net/gve/gve_ethdev.c | 29 +-
drivers/net/gve/gve_ethdev.h | 2 +
drivers/net/gve/gve_rx_dqo.c | 86 ++--
drivers/net/gve/gve_tx_dqo.c | 11 +-
drivers/net/hinic/base/hinic_pmd_eqs.c | 2 +-
drivers/net/hinic/base/hinic_pmd_mbox.c | 6 +-
drivers/net/hinic/base/hinic_pmd_niccfg.c | 8 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 4 +-
drivers/net/hns3/hns3_dump.c | 12 +-
drivers/net/hns3/hns3_intr.c | 18 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
drivers/net/hns3/hns3_regs.c | 18 +-
drivers/net/i40e/base/i40e_adminq.c | 19 +-
drivers/net/i40e/base/i40e_common.c | 42 +-
drivers/net/i40e/base/i40e_devids.h | 3 +-
drivers/net/i40e/base/i40e_diag.c | 12 +-
drivers/net/i40e/base/i40e_nvm.c | 16 +-
drivers/net/i40e/i40e_ethdev.c | 37 +-
drivers/net/i40e/i40e_pf.c | 8 +-
drivers/net/i40e/i40e_rxtx.c | 24 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 7 +
drivers/net/iavf/iavf_ethdev.c | 46 +-
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 7 +
drivers/net/iavf/iavf_vchnl.c | 5 +-
drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
drivers/net/ice/base/ice_controlq.c | 23 +-
drivers/net/ice/base/ice_nvm.c | 36 +-
drivers/net/ice/base/ice_switch.c | 2 -
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_dcf_vf_representor.c | 14 +-
drivers/net/ice/ice_ethdev.c | 44 +-
drivers/net/ice/ice_fdir_filter.c | 2 +-
drivers/net/ice/ice_hash.c | 8 +-
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 7 +
drivers/net/ionic/ionic_osdep.h | 30 +-
drivers/net/ipn3ke/ipn3ke_ethdev.c | 4 +-
drivers/net/ipn3ke/ipn3ke_flow.c | 23 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 20 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 10 +-
drivers/net/ixgbe/base/ixgbe_82599.c | 8 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 7 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 24 +-
drivers/net/ixgbe/ixgbe_pf.c | 18 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 8 +-
drivers/net/mana/meson.build | 4 +-
drivers/net/memif/rte_eth_memif.c | 12 +-
drivers/net/mlx4/mlx4.c | 4 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 17 +-
drivers/net/mlx5/mlx5.h | 9 +-
drivers/net/mlx5/mlx5_flow_dv.c | 7 +-
drivers/net/mlx5/mlx5_flow_flex.c | 194 +++++---
drivers/net/mlx5/mlx5_flow_hw.c | 8 +
drivers/net/mlx5/mlx5_rx.h | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfb/nfb_tx.c | 2 +-
.../net/nfp/flower/nfp_flower_representor.c | 6 +-
drivers/net/nfp/nfp_ethdev.c | 18 +-
drivers/net/nfp/nfp_flow.c | 1 -
drivers/net/nfp/nfp_ipsec.c | 15 +-
drivers/net/nfp/nfp_net_common.c | 10 +-
drivers/net/nfp/nfp_net_common.h | 2 +
drivers/net/ngbe/base/ngbe_hw.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 2 +-
drivers/net/ngbe/ngbe_pf.c | 10 +-
drivers/net/octeon_ep/cnxk_ep_tx.c | 2 +-
drivers/net/octeon_ep/cnxk_ep_vf.c | 12 +-
drivers/net/octeon_ep/otx2_ep_vf.c | 18 +-
drivers/net/octeon_ep/otx_ep_common.h | 2 +-
drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +--
drivers/net/octeon_ep/otx_ep_mbox.c | 30 +-
drivers/net/octeon_ep/otx_ep_rxtx.c | 74 +--
drivers/net/octeon_ep/otx_ep_vf.c | 20 +-
drivers/net/octeontx/base/octeontx_pkovf.c | 2 +-
drivers/net/octeontx/octeontx_ethdev.c | 4 +-
drivers/net/pcap/pcap_ethdev.c | 43 +-
drivers/net/pfe/pfe_ethdev.c | 22 +-
drivers/net/pfe/pfe_hif.c | 12 +-
drivers/net/pfe/pfe_hif_lib.c | 2 +-
drivers/net/qede/qede_rxtx.c | 66 +--
drivers/net/sfc/sfc_flow_rss.c | 4 +-
drivers/net/sfc/sfc_mae.c | 23 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/tap/tap_netlink.c | 3 +-
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 4 +-
drivers/net/txgbe/txgbe_ipsec.c | 24 +-
drivers/net/txgbe/txgbe_pf.c | 20 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 2 +-
drivers/net/virtio/virtio_user_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 2 +-
drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c | 14 +-
drivers/raw/ifpga/afu_pmd_n3000.c | 2 +-
drivers/raw/ifpga/base/opae_intel_max10.c | 11 +-
drivers/raw/ifpga/ifpga_rawdev.c | 102 ++--
drivers/regex/cn9k/cn9k_regexdev.c | 2 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 10 +-
drivers/vdpa/nfp/nfp_vdpa.c | 2 +-
drivers/vdpa/nfp/nfp_vdpa_core.c | 25 +-
.../pipeline_worker_generic.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 6 +-
examples/ipsec-secgw/ipsec_process.c | 3 +-
examples/vhost_blk/vhost_blk.c | 2 +-
lib/bpf/bpf_convert.c | 2 +-
lib/dmadev/rte_dmadev.c | 2 +-
lib/eal/common/eal_common_dev.c | 13 +-
lib/eal/x86/include/rte_io.h | 2 +-
lib/ethdev/rte_ethdev.c | 18 +-
lib/fib/dir24_8.c | 4 +-
lib/fib/trie.c | 10 +-
lib/hash/rte_thash.c | 26 +-
lib/log/rte_log.h | 21 +
lib/pcapng/rte_pcapng.c | 12 +-
lib/power/power_acpi_cpufreq.c | 6 +-
lib/power/power_amd_pstate_cpufreq.c | 6 +-
lib/power/power_common.c | 22 +
lib/power/power_common.h | 1 +
lib/power/power_cppc_cpufreq.c | 8 +-
lib/power/power_pstate_cpufreq.c | 6 +-
lib/power/rte_power_pmd_mgmt.c | 11 +-
lib/vhost/rte_vhost.h | 2 +
lib/vhost/socket.c | 11 +
lib/vhost/vdpa.c | 1 +
lib/vhost/vhost_user.c | 2 +-
285 files changed, 2474 insertions(+), 2078 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'bus/vdev: revert fix devargs in secondary process' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
` (119 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Mingjin Ye; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d5ac7180a8b48033600cd7006a7da1a95991c1f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d5ac7180a8b48033600cd7006a7da1a95991c1f Mon Sep 17 00:00:00 2001
From: Mingjin Ye <mingjinx.ye@intel.com>
Date: Thu, 14 Mar 2024 09:36:28 +0000
Subject: [PATCH] bus/vdev: revert fix devargs in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
The ASan tool detected a memory leak in the vdev driver's
alloc_devargs. An earlier commit changed vdev insertion so that
the primary process allocates devargs while the secondary process
only looks them up. This prevents the device from being created
when the secondary process has not initialised the vdev device,
and it did not address the root cause of the leak.
That commit is therefore reverted below. After the revert, the
memory leak still exists.
Bugzilla ID: 1450
Fixes: 6666628362c9 ("bus/vdev: fix devargs in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
drivers/bus/vdev/vdev.c | 21 +--------------------
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index dcedd0d4a0..ec7abe7cda 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -263,22 +263,6 @@ alloc_devargs(const char *name, const char *args)
return devargs;
}
-static struct rte_devargs *
-vdev_devargs_lookup(const char *name)
-{
- struct rte_devargs *devargs;
- char dev_name[32];
-
- RTE_EAL_DEVARGS_FOREACH("vdev", devargs) {
- devargs->bus->parse(devargs->name, &dev_name);
- if (strcmp(dev_name, name) == 0) {
- VDEV_LOG(INFO, "devargs matched %s", dev_name);
- return devargs;
- }
- }
- return NULL;
-}
-
static int
insert_vdev(const char *name, const char *args,
struct rte_vdev_device **p_dev,
@@ -291,10 +275,7 @@ insert_vdev(const char *name, const char *args,
if (name == NULL)
return -EINVAL;
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- devargs = alloc_devargs(name, args);
- else
- devargs = vdev_devargs_lookup(name);
+ devargs = alloc_devargs(name, args);
if (!devargs)
return -ENOMEM;
--
2.34.1
* patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
2024-11-11 6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-12 9:02 ` David Marchand
2024-11-11 6:26 ` patch 'drivers: remove redundant newline from logs' " Xueming Li
` (118 subsequent siblings)
120 siblings, 1 reply; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Stephen Hemminger, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Fri, 17 Nov 2023 14:18:23 +0100
Subject: [PATCH] log: add a per line log helper
Cc: Xueming Li <xuemingl@nvidia.com>
[upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
The gcc builtin __builtin_strchr can be used in a static assertion to
check whether a passed format string literal contains a \n.
This is useful to detect double \n in log messages.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
devtools/checkpatches.sh | 8 ++++++++
lib/log/rte_log.h | 21 +++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 10b79ca2bc..10d1bf490b 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -53,6 +53,14 @@ print_usage () {
check_forbidden_additions() { # <patch>
res=0
+ # refrain from new calls to RTE_LOG
+ awk -v FOLDERS="lib" \
+ -v EXPRESSIONS="RTE_LOG\\\(" \
+ -v RET_ON_FAIL=1 \
+ -v MESSAGE='Prefer RTE_LOG_LINE' \
+ -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+ "$1" || res=1
+
# refrain from new additions of rte_panic() and rte_exit()
# multiple folders and expressions are separated by spaces
awk -v FOLDERS="lib drivers" \
diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h
index f7a8405de9..584cea541e 100644
--- a/lib/log/rte_log.h
+++ b/lib/log/rte_log.h
@@ -17,6 +17,7 @@
extern "C" {
#endif
+#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdarg.h>
@@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \
0)
+#if defined(RTE_TOOLCHAIN_GCC) && !defined(PEDANTIC)
+#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \
+ static_assert(!__builtin_strchr(fmt, '\n'), \
+ "This log format string contains a \\n")
+#else
+#define RTE_LOG_CHECK_NO_NEWLINE(...)
+#endif
+
+#define RTE_LOG_LINE(l, t, ...) do { \
+ RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
+ RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
+#define RTE_LOG_DP_LINE(l, t, ...) do { \
+ RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
+ RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
#define RTE_LOG_REGISTER_IMPL(type, name, level) \
int type; \
RTE_INIT(__##type) \
--
2.34.1
* patch 'drivers: remove redundant newline from logs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
2024-11-11 6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
` (117 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5b424bd34d8c972d428d03bc9952528d597e2040
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Wed, 13 Dec 2023 20:29:58 +0100
Subject: [PATCH] drivers: remove redundant newline from logs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]
Fix places where two newline characters may be logged.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/baseband/acc/rte_acc100_pmd.c | 22 +-
drivers/baseband/acc/rte_vrb_pmd.c | 26 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
drivers/baseband/la12xx/bbdev_la12xx.c | 4 +-
.../baseband/turbo_sw/bbdev_turbo_software.c | 4 +-
drivers/bus/cdx/cdx_vfio.c | 8 +-
drivers/bus/dpaa/include/fman.h | 3 +-
drivers/bus/fslmc/fslmc_bus.c | 8 +-
drivers/bus/fslmc/fslmc_vfio.c | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 4 +-
drivers/bus/ifpga/ifpga_bus.c | 8 +-
drivers/bus/vdev/vdev_params.c | 2 +-
drivers/bus/vmbus/vmbus_common.c | 2 +-
drivers/common/cnxk/roc_dev.c | 2 +-
drivers/common/cnxk/roc_model.c | 2 +-
drivers/common/cnxk/roc_nix_ops.c | 20 +-
drivers/common/cnxk/roc_nix_tm.c | 2 +-
drivers/common/cnxk/roc_nix_tm_mark.c | 2 +-
drivers/common/cnxk/roc_nix_tm_ops.c | 2 +-
drivers/common/cnxk/roc_nix_tm_utils.c | 2 +-
drivers/common/cnxk/roc_sso.c | 2 +-
drivers/common/cnxk/roc_tim.c | 2 +-
drivers/common/cpt/cpt_ucode.h | 4 +-
drivers/common/idpf/idpf_common_logs.h | 5 +-
drivers/common/octeontx/octeontx_mbox.c | 4 +-
drivers/common/qat/qat_pf2vf.c | 4 +-
drivers/common/qat/qat_qp.c | 2 +-
drivers/compress/isal/isal_compress_pmd.c | 78 +++----
drivers/compress/octeontx/otx_zip.h | 12 +-
drivers/compress/octeontx/otx_zip_pmd.c | 14 +-
drivers/compress/zlib/zlib_pmd.c | 26 +--
drivers/compress/zlib/zlib_pmd_ops.c | 4 +-
drivers/crypto/bcmfs/bcmfs_qp.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.c | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 32 +--
drivers/crypto/caam_jr/caam_jr_uio.c | 6 +-
drivers/crypto/ccp/ccp_dev.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cnxk_se.h | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 42 ++--
drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 16 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
drivers/crypto/dpaa_sec/dpaa_sec_log.h | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 6 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.c | 4 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 2 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 28 +--
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 +-
.../crypto/octeontx/otx_cryptodev_hw_access.h | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 42 ++--
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 44 ++--
drivers/crypto/qat/qat_asym.c | 2 +-
drivers/crypto/qat/qat_sym_session.c | 12 +-
drivers/crypto/uadk/uadk_crypto_pmd.c | 8 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/dma/dpaa/dpaa_qdma.c | 40 ++--
drivers/dma/dpaa2/dpaa2_qdma.c | 10 +-
drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
drivers/dma/idxd/idxd_common.c | 2 +-
drivers/dma/idxd/idxd_pci.c | 6 +-
drivers/dma/ioat/ioat_dmadev.c | 14 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 2 +-
drivers/event/dlb2/dlb2.c | 220 +++++++++---------
drivers/event/dlb2/dlb2_xstats.c | 6 +-
drivers/event/dlb2/pf/dlb2_main.c | 52 ++---
drivers/event/dlb2/pf/dlb2_pf.c | 20 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 14 +-
drivers/event/octeontx/timvf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 30 +--
drivers/event/opdl/opdl_test.c | 116 ++++-----
drivers/event/sw/sw_evdev.c | 22 +-
drivers/event/sw/sw_evdev_xstats.c | 4 +-
drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 8 +-
drivers/mempool/octeontx/octeontx_fpavf.c | 22 +-
.../mempool/octeontx/rte_mempool_octeontx.c | 6 +-
drivers/ml/cnxk/cn10k_ml_dev.c | 32 +--
drivers/ml/cnxk/cnxk_ml_ops.c | 20 +-
drivers/net/atlantic/atl_rxtx.c | 4 +-
drivers/net/atlantic/hw_atl/hw_atl_utils.c | 12 +-
drivers/net/axgbe/axgbe_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.c | 8 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 +-
drivers/net/bonding/rte_eth_bond_alb.c | 2 +-
drivers/net/bonding/rte_eth_bond_api.c | 4 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
drivers/net/cnxk/cnxk_ethdev.c | 4 +-
drivers/net/cnxk/cnxk_ethdev_mcs.c | 14 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 2 +-
drivers/net/cpfl/cpfl_flow_parser.c | 2 +-
drivers/net/cpfl/cpfl_fxp_rule.c | 8 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 16 +-
drivers/net/dpaa2/dpaa2_flow.c | 36 +--
drivers/net/dpaa2/dpaa2_mux.c | 4 +-
drivers/net/dpaa2/dpaa2_recycle.c | 6 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 14 +-
drivers/net/dpaa2/dpaa2_sparser.c | 8 +-
drivers/net/dpaa2/dpaa2_tm.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 2 +-
drivers/net/enetc/enetc_ethdev.c | 4 +-
drivers/net/enetfec/enet_ethdev.c | 4 +-
drivers/net/enetfec/enet_uio.c | 10 +-
drivers/net/enic/enic_ethdev.c | 20 +-
drivers/net/enic/enic_flow.c | 20 +-
drivers/net/enic/enic_vf_representor.c | 16 +-
drivers/net/failsafe/failsafe_args.c | 2 +-
drivers/net/failsafe/failsafe_eal.c | 2 +-
drivers/net/failsafe/failsafe_ether.c | 4 +-
drivers/net/failsafe/failsafe_intr.c | 6 +-
drivers/net/gve/base/gve_adminq.c | 2 +-
drivers/net/hinic/base/hinic_pmd_eqs.c | 2 +-
drivers/net/hinic/base/hinic_pmd_mbox.c | 6 +-
drivers/net/hinic/base/hinic_pmd_niccfg.c | 8 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 4 +-
drivers/net/hns3/hns3_dump.c | 12 +-
drivers/net/hns3/hns3_intr.c | 12 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
drivers/net/hns3/hns3_regs.c | 4 +-
drivers/net/i40e/i40e_ethdev.c | 37 ++-
drivers/net/i40e/i40e_pf.c | 8 +-
drivers/net/i40e/i40e_rxtx.c | 24 +-
drivers/net/iavf/iavf_ethdev.c | 12 +-
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_dcf_vf_representor.c | 14 +-
drivers/net/ice/ice_ethdev.c | 44 ++--
drivers/net/ice/ice_fdir_filter.c | 2 +-
drivers/net/ice/ice_hash.c | 8 +-
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ipn3ke/ipn3ke_ethdev.c | 4 +-
drivers/net/ipn3ke/ipn3ke_flow.c | 23 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 20 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 7 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 24 +-
drivers/net/ixgbe/ixgbe_pf.c | 18 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 8 +-
drivers/net/memif/rte_eth_memif.c | 2 +-
drivers/net/mlx4/mlx4.c | 4 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/ngbe/base/ngbe_hw.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 2 +-
drivers/net/ngbe/ngbe_pf.c | 10 +-
drivers/net/octeon_ep/cnxk_ep_tx.c | 2 +-
drivers/net/octeon_ep/cnxk_ep_vf.c | 12 +-
drivers/net/octeon_ep/otx2_ep_vf.c | 18 +-
drivers/net/octeon_ep/otx_ep_common.h | 2 +-
drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +++----
drivers/net/octeon_ep/otx_ep_mbox.c | 30 +--
drivers/net/octeon_ep/otx_ep_rxtx.c | 74 +++---
drivers/net/octeon_ep/otx_ep_vf.c | 20 +-
drivers/net/octeontx/base/octeontx_pkovf.c | 2 +-
drivers/net/octeontx/octeontx_ethdev.c | 4 +-
drivers/net/pcap/pcap_ethdev.c | 4 +-
drivers/net/pfe/pfe_ethdev.c | 22 +-
drivers/net/pfe/pfe_hif.c | 12 +-
drivers/net/pfe/pfe_hif_lib.c | 2 +-
drivers/net/qede/qede_rxtx.c | 66 +++---
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 4 +-
drivers/net/txgbe/txgbe_ipsec.c | 24 +-
drivers/net/txgbe/txgbe_pf.c | 20 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 2 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 2 +-
drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c | 14 +-
drivers/raw/ifpga/afu_pmd_n3000.c | 2 +-
drivers/raw/ifpga/ifpga_rawdev.c | 94 ++++----
drivers/regex/cn9k/cn9k_regexdev.c | 2 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 10 +-
drivers/vdpa/nfp/nfp_vdpa.c | 2 +-
171 files changed, 1194 insertions(+), 1211 deletions(-)
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 292537e24d..9d028f0f48 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
}
rte_bbdev_log_debug(
- "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u\n",
+ "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u",
(d->pf_device) ? "PF" : "VF",
(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);
if ((harq_layout[harq_index].offset > 0) && harq_prun) {
- rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
+ rte_bbdev_log_debug("HARQ IN offset unexpected for now");
fcw->hcin_size0 = harq_layout[harq_index].size0;
fcw->hcin_offset = harq_layout[harq_index].offset;
fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
uint32_t harq_index;
if (harq_in_length == 0) {
- rte_bbdev_log(ERR, "Loopback of invalid null size\n");
+ rte_bbdev_log(ERR, "Loopback of invalid null size");
return -EINVAL;
}
@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
fcw->hcin_en = 1;
fcw->hcout_en = 1;
- rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d\n",
+ rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d",
ddr_mem_in, harq_index,
harq_layout[harq_index].offset, harq_in_length,
harq_dma_length_in);
@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
fcw->hcin_size0 = harq_in_length;
}
harq_layout[harq_index].val = 0;
- rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d\n",
+ rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d",
fcw->hcin_size0, fcw->hcin_offset, fcw->hcin_size1);
fcw->hcout_size0 = harq_in_length;
fcw->hcin_decomp_mode = h_comp;
@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
if (i > 0)
same_op = cmp_ldpc_dec_op(&ops[i-1]);
- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x num %d\n", desc, rsp.val, desc->req.numCBs);
+ rte_bbdev_log_debug("Resp. desc %p: %x num %d", desc, rsp.val, desc->req.numCBs);
/* Dequeue */
op = desc->req.op_addr;
@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
__ATOMIC_RELAXED);
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d\n",
+ rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d",
desc, rsp.val, descs_in_tb, desc->req.numCBs);
op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x\n", desc, rsp.val);
+ rte_bbdev_log_debug("Resp. desc %p: %x", desc, rsp.val);
/* Dequeue */
op = desc->req.op_addr;
@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
__ATOMIC_RELAXED);
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d\n",
+ rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d",
desc, rsp.val, cb_idx, cbs_in_tb);
op->status |= ((rsp.input_err) ? (1 << RTE_BBDEV_DATA_ERROR) : 0);
@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
}
if (aram_address > ACC100_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, ACC100_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 686e086a5c..88e1d03ebf 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -348,7 +348,7 @@ fetch_acc_config(struct rte_bbdev *dev)
}
rte_bbdev_log_debug(
- "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u\n",
+ "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u",
(d->pf_device) ? "PF" : "VF",
(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -464,7 +464,7 @@ vrb_dev_interrupt_handler(void *cb_arg)
}
} else {
rte_bbdev_log_debug(
- "VRB VF Interrupt received, Info Ring data: 0x%x\n",
+ "VRB VF Interrupt received, Info Ring data: 0x%x",
ring_data->val);
switch (int_nb) {
case ACC_VF_INT_DMA_DL_DESC_IRQ:
@@ -698,7 +698,7 @@ vrb_intr_enable(struct rte_bbdev *dev)
if (d->device_variant == VRB1_VARIANT) {
/* On VRB1: cannot enable MSI/IR to avoid potential back-pressure corner case. */
- rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt\n",
+ rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt",
dev->data->name);
return -ENOTSUP;
}
@@ -800,7 +800,7 @@ vrb_intr_enable(struct rte_bbdev *dev)
return 0;
}
- rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts\n",
+ rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts",
dev->data->name);
return -ENOTSUP;
}
@@ -1023,7 +1023,7 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
d->queue_offset(d->pf_device, q->vf_id, q->qgrp_id, q->aq_id));
rte_bbdev_log_debug(
- "Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p\n",
+ "Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p",
dev->data->dev_id, queue_id, q->qgrp_id, q->vf_id,
q->aq_id, q->aq_depth, q->mmio_reg_enqueue,
d->mmio_base);
@@ -1076,7 +1076,7 @@ vrb_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
);
} else if (op_type == RTE_BBDEV_OP_MLDTS) {
struct rte_bbdev_mldts_op *op_mldts = (struct rte_bbdev_mldts_op *) op;
- rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x\n",
+ rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x",
index,
op_mldts->mldts.num_rbs, op_mldts->mldts.num_layers,
op_mldts->mldts.r_rep,
@@ -2492,7 +2492,7 @@ vrb_enqueue_ldpc_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_dec_op *op,
hq_output = op->ldpc_dec.harq_combined_output.data;
hq_len = op->ldpc_dec.harq_combined_output.length;
if (unlikely(!mbuf_append(hq_output_head, hq_output, hq_len))) {
- rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d",
hq_output->buf_len,
hq_len);
return -1;
@@ -2985,7 +2985,7 @@ vrb_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
break;
}
avail -= 1;
- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x %x %x\n", desc, rsp.val, desc->rsp.add_info_0,
+ rte_bbdev_log_debug("Resp. desc %p: %x %x %x", desc, rsp.val, desc->rsp.add_info_0,
desc->rsp.add_info_1);
/* Dequeue. */
@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
}
if (check_bit(op->ldpc_dec.op_flags, RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK)) {
- rte_bbdev_log_debug("TB-CRC Check %x\n", tb_crc_check);
+ rte_bbdev_log_debug("TB-CRC Check %x", tb_crc_check);
if (tb_crc_check > 0)
op->status |= 1 << RTE_BBDEV_CRC_ERROR;
}
@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
layer_idx = RTE_MIN(op->mldts.num_layers - VRB2_MLD_MIN_LAYER,
VRB2_MLD_MAX_LAYER - VRB2_MLD_MIN_LAYER);
rrep_idx = RTE_MIN(op->mldts.r_rep, VRB2_MLD_MAX_RREP);
- rte_bbdev_log_debug("RB %d index %d %d max %d\n", op->mldts.num_rbs, layer_idx, rrep_idx,
+ rte_bbdev_log_debug("RB %d index %d %d max %d", op->mldts.num_rbs, layer_idx, rrep_idx,
max_rb[layer_idx][rrep_idx]);
return (op->mldts.num_rbs <= max_rb[layer_idx][rrep_idx]);
@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
}
if (aram_address > VRB1_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, VRB1_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
}
}
if (aram_address > VRB2_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, VRB2_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6b0644ffc5..d60cd3a5c5 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
do {
if (cnt > 0)
usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
- rte_bbdev_log_debug("Acquiring Mutex for %x\n",
+ rte_bbdev_log_debug("Acquiring Mutex for %x",
q->ddr_mutex_uuid);
fpga_reg_write_32(q->d->mmio_base,
FPGA_5GNR_FEC_MUTEX,
mutex_ctrl);
mutex_read = fpga_reg_read_32(q->d->mmio_base,
FPGA_5GNR_FEC_MUTEX);
- rte_bbdev_log_debug("Mutex %x cnt %d owner %x\n",
+ rte_bbdev_log_debug("Mutex %x cnt %d owner %x",
mutex_read, cnt, q->ddr_mutex_uuid);
cnt++;
} while ((mutex_read >> 16) != q->ddr_mutex_uuid);
@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
if (reg_32 < harq_in_length) {
left_length = reg_32;
- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
}
input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
if (reg < harq_in_length) {
harq_in_length = reg;
- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
}
if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
- rte_bbdev_log(ERR, "HARQ output buffer warning %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output buffer warning %d %d",
harq_output->buf_len -
rte_pktmbuf_headroom(harq_output),
harq_in_length);
harq_in_length = harq_output->buf_len -
rte_pktmbuf_headroom(harq_output);
if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
- rte_bbdev_log(ERR, "HARQ output buffer issue %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output buffer issue %d %d",
harq_output->buf_len, harq_in_length);
return -1;
}
@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
if (reg == FPGA_DDR_OVERFLOW) {
rte_bbdev_log(ERR,
- "Read address is overflow!\n");
+ "Read address is overflow!");
return -1;
}
}
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index 1a56e73abd..af4b4f1e9a 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -201,7 +201,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
q_priv->la12xx_core_id = LA12XX_LDPC_DEC_CORE;
break;
default:
- rte_bbdev_log(ERR, "Unsupported op type\n");
+ rte_bbdev_log(ERR, "Unsupported op type");
return -1;
}
@@ -269,7 +269,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_dec_queues++);
break;
default:
- rte_bbdev_log(ERR, "Not supported op type\n");
+ rte_bbdev_log(ERR, "Not supported op type");
return -1;
}
ch->op_type = rte_cpu_to_be_32(q_priv->op_type);
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 8ddc7ff05f..a66dcd8962 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -269,7 +269,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
dev_info->num_queues[op_cap->type] = num_queue_per_type;
}
- rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
+ rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
}
/* Release queue */
@@ -1951,7 +1951,7 @@ turbo_sw_bbdev_probe(struct rte_vdev_device *vdev)
parse_turbo_sw_params(&init_params, input_args);
rte_bbdev_log_debug(
- "Initialising %s on NUMA node %d with max queues: %d\n",
+ "Initialising %s on NUMA node %d with max queues: %d",
name, init_params.socket_id, init_params.queues_num);
return turbo_sw_bbdev_create(vdev, &init_params);
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 79abc3f120..664f267471 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -638,7 +638,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
feature->flags |= VFIO_DEVICE_FEATURE_SET;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret) {
- CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)\n",
+ CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
return ret;
@@ -648,7 +648,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
vfio_bm_feature->op = VFIO_DEVICE_FEATURE_SET_MASTER;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret < 0)
- CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
@@ -682,7 +682,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
feature->flags |= VFIO_DEVICE_FEATURE_SET;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret) {
- CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
return ret;
@@ -692,7 +692,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
vfio_bm_feature->op = VFIO_DEVICE_FEATURE_CLEAR_MASTER;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret < 0)
- CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..19f6132bba 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -403,7 +403,8 @@ extern int fman_ccsr_map_fd;
#define FMAN_ERR(rc, fmt, args...) \
do { \
_errno = (rc); \
- DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+ rte_log(RTE_LOG_ERR, dpaa_logtype_bus, "dpaa: " fmt "(%d)\n", \
+ ##args, errno); \
} while (0)
#define FMAN_IP_REV_1 0xC30C4
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 89f0f329c0..adb452fd3e 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
const struct rte_dpaa2_device *dstart;
struct rte_dpaa2_device *dev;
- DPAA2_BUS_DEBUG("Finding a device named %s\n", (const char *)data);
+ DPAA2_BUS_DEBUG("Finding a device named %s", (const char *)data);
/* find_device is always called with an opaque object which should be
* passed along to the 'cmp' function iterating over all device obj
@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
}
while (dev != NULL) {
if (cmp(&dev->device, data) == 0) {
- DPAA2_BUS_DEBUG("Found device (%s)\n",
+ DPAA2_BUS_DEBUG("Found device (%s)",
dev->device.name);
return &dev->device;
}
@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
/* Expectation is that device would be name=device_name */
if (strncmp(str, "name=", 5) != 0) {
- DPAA2_BUS_DEBUG("Invalid device string (%s)\n", str);
+ DPAA2_BUS_DEBUG("Invalid device string (%s)", str);
return NULL;
}
/* Now that name=device_name format is available, split */
dup = strdup(str);
if (dup == NULL) {
- DPAA2_BUS_DEBUG("Dup string (%s) failed!\n", str);
+ DPAA2_BUS_DEBUG("Dup string (%s) failed!", str);
return NULL;
}
dev_name = dup + strlen("name=");
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 5966776a85..b90efeb651 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -232,7 +232,7 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
/* iova_addr may be set to RTE_BAD_IOVA */
if (iova_addr == RTE_BAD_IOVA) {
- DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
+ DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
cur_len += map_len;
continue;
}
@@ -389,7 +389,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
dma_map.vaddr = vaddr;
dma_map.iova = iova;
- DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64"\n",
+ DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
(uint64_t)dma_map.size);
ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
@@ -480,13 +480,13 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
if (ret) {
DPAA2_BUS_ERR(" %s cannot get group status, "
- "error %i (%s)\n", dev_addr,
+ "error %i (%s)", dev_addr,
errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
- DPAA2_BUS_ERR(" %s VFIO group is not viable!\n", dev_addr);
+ DPAA2_BUS_ERR(" %s VFIO group is not viable!", dev_addr);
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
@@ -503,7 +503,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
&vfio_container_fd);
if (ret) {
DPAA2_BUS_ERR(" %s cannot add VFIO group to container, "
- "error %i (%s)\n", dev_addr,
+ "error %i (%s)", dev_addr,
errno, strerror(errno));
close(vfio_group_fd);
close(vfio_container_fd);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 07256ed7ec..7e858a113f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -86,7 +86,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
sizeof(struct queue_storage_info_t),
RTE_CACHE_LINE_SIZE);
if (!rxq->q_storage) {
- DPAA2_BUS_ERR("q_storage allocation failed\n");
+ DPAA2_BUS_ERR("q_storage allocation failed");
ret = -ENOMEM;
goto err;
}
@@ -94,7 +94,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
ret = dpaa2_alloc_dq_storage(rxq->q_storage);
if (ret) {
- DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed\n");
+ DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed");
goto err;
}
}
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index ffb0c61214..11b31eee4f 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -180,7 +180,7 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rawdev->dev_ops->firmware_load &&
rawdev->dev_ops->firmware_load(rawdev,
&afu_pr_conf)){
- IFPGA_BUS_ERR("firmware load error %d\n", ret);
+ IFPGA_BUS_ERR("firmware load error %d", ret);
goto end;
}
afu_dev->id.uuid.uuid_low = afu_pr_conf.afu_id.uuid.uuid_low;
@@ -316,7 +316,7 @@ ifpga_probe_all_drivers(struct rte_afu_device *afu_dev)
/* Check if a driver is already loaded */
if (rte_dev_is_probed(&afu_dev->device)) {
- IFPGA_BUS_DEBUG("Device %s is already probed\n",
+ IFPGA_BUS_DEBUG("Device %s is already probed",
rte_ifpga_device_name(afu_dev));
return -EEXIST;
}
@@ -353,7 +353,7 @@ ifpga_probe(void)
if (ret == -EEXIST)
continue;
if (ret < 0)
- IFPGA_BUS_ERR("failed to initialize %s device\n",
+ IFPGA_BUS_ERR("failed to initialize %s device",
rte_ifpga_device_name(afu_dev));
}
@@ -408,7 +408,7 @@ ifpga_remove_driver(struct rte_afu_device *afu_dev)
name = rte_ifpga_device_name(afu_dev);
if (afu_dev->driver == NULL) {
- IFPGA_BUS_DEBUG("no driver attach to device %s\n", name);
+ IFPGA_BUS_DEBUG("no driver attach to device %s", name);
return 1;
}
diff --git a/drivers/bus/vdev/vdev_params.c b/drivers/bus/vdev/vdev_params.c
index 51583fe949..68ae09e2e9 100644
--- a/drivers/bus/vdev/vdev_params.c
+++ b/drivers/bus/vdev/vdev_params.c
@@ -53,7 +53,7 @@ rte_vdev_dev_iterate(const void *start,
if (str != NULL) {
kvargs = rte_kvargs_parse(str, vdev_params_keys);
if (kvargs == NULL) {
- VDEV_LOG(ERR, "cannot parse argument list\n");
+ VDEV_LOG(ERR, "cannot parse argument list");
rte_errno = EINVAL;
return NULL;
}
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index b9139c6e6c..8a965d10d9 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -108,7 +108,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr,
/* no initialization when marked as blocked, return without error */
if (dev->device.devargs != NULL &&
dev->device.devargs->policy == RTE_DEV_BLOCKED) {
- VMBUS_LOG(INFO, " Device is blocked, not initializing\n");
+ VMBUS_LOG(INFO, " Device is blocked, not initializing");
return 1;
}
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 14aff233d5..35eb8b7628 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
rc = plt_thread_create_control(&dev->sync.pfvf_msg_thread, name,
pf_vf_mbox_thread_main, dev);
if (rc != 0) {
- plt_err("Failed to create thread for VF mbox handling\n");
+ plt_err("Failed to create thread for VF mbox handling");
goto thread_fail;
}
}
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c
index 6dc2afe7f0..446ab3d2bd 100644
--- a/drivers/common/cnxk/roc_model.c
+++ b/drivers/common/cnxk/roc_model.c
@@ -153,7 +153,7 @@ cn10k_part_pass_get(uint32_t *part, uint32_t *pass)
dir = opendir(SYSFS_PCI_DEVICES);
if (dir == NULL) {
- plt_err("%s(): opendir failed: %s\n", __func__,
+ plt_err("%s(): opendir failed: %s", __func__,
strerror(errno));
return -errno;
}
diff --git a/drivers/common/cnxk/roc_nix_ops.c b/drivers/common/cnxk/roc_nix_ops.c
index 9e66ad1a49..efb0a41d07 100644
--- a/drivers/common/cnxk/roc_nix_ops.c
+++ b/drivers/common/cnxk/roc_nix_ops.c
@@ -220,7 +220,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
}
- plt_nix_dbg("tcpv4 lso fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TCP LSO
@@ -240,7 +240,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
}
- plt_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tcpv6 lso fmt=%u", rsp->lso_format_idx);
/*
* IPv4/UDP/TUN HDR/IPv4/TCP LSO
@@ -256,7 +256,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v4v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/UDP/TUN HDR/IPv6/TCP LSO
@@ -272,7 +272,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v4v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/UDP/TUN HDR/IPv4/TCP LSO
@@ -288,7 +288,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v6v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/UDP/TUN HDR/IPv6/TCP LSO
@@ -304,7 +304,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v6v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/TUN HDR/IPv4/TCP LSO
@@ -320,7 +320,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- plt_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v4v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/TUN HDR/IPv6/TCP LSO
@@ -336,7 +336,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- plt_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v4v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TUN HDR/IPv4/TCP LSO
@@ -352,7 +352,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- plt_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v6v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TUN HDR/IPv6/TCP LSO
@@ -369,7 +369,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- plt_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v6v6 fmt=%u", rsp->lso_format_idx);
rc = 0;
exit:
mbox_put(mbox);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 9e5e614b3b..92401e04d0 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
if (rc) {
roc_nix_tm_dump(sq->roc_nix, NULL);
roc_nix_queues_ctx_dump(sq->roc_nix, NULL);
- plt_err("Failed to drain sq %u, rc=%d\n", sq->qid, rc);
+ plt_err("Failed to drain sq %u, rc=%d", sq->qid, rc);
return rc;
}
/* Freed all pending SQEs for this SQ, so disable this node */
diff --git a/drivers/common/cnxk/roc_nix_tm_mark.c b/drivers/common/cnxk/roc_nix_tm_mark.c
index e9a7604e79..092d0851b9 100644
--- a/drivers/common/cnxk/roc_nix_tm_mark.c
+++ b/drivers/common/cnxk/roc_nix_tm_mark.c
@@ -266,7 +266,7 @@ nix_tm_mark_init(struct nix *nix)
}
nix->tm_markfmt[i][j] = rsp->mark_format_idx;
- plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u\n", i,
+ plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u", i,
j, nix->tm_markfmt[i][j]);
}
}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index e1cef7a670..c1b91ad92f 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
/* Wait for sq entries to be flushed */
rc = roc_nix_tm_sq_flush_spin(sq);
if (rc) {
- plt_err("Failed to drain sq, rc=%d\n", rc);
+ plt_err("Failed to drain sq, rc=%d", rc);
goto cleanup;
}
}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 8e3da95a45..4a09cc2aae 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -583,7 +583,7 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
/* Configure TL4 to send to SDP channel instead of CGX/LBK */
if (nix->sdp_link) {
relchan = nix->tx_chan_base & 0xff;
- plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u\n", relchan, schq,
+ plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u", relchan, schq,
nix->tx_chan_cnt);
reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
regval[k] = BIT_ULL(12);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 748d287bad..b02c9c7f38 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
mbox_alloc_msg_free_rsrc_cnt(mbox);
rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
if (rc) {
- plt_err("Failed to get free resource count\n");
+ plt_err("Failed to get free resource count");
rc = -EIO;
goto exit;
}
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index f8607b2852..d39af3c85e 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
mbox_alloc_msg_free_rsrc_cnt(mbox);
rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
if (rc) {
- plt_err("Failed to get free resource count\n");
+ plt_err("Failed to get free resource count");
mbox_put(mbox);
return -EIO;
}
diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h
index b393be4cf6..2e6846312b 100644
--- a/drivers/common/cpt/cpt_ucode.h
+++ b/drivers/common/cpt/cpt_ucode.h
@@ -2589,7 +2589,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform,
sess->cpt_op |= CPT_OP_CIPHER_DECRYPT;
sess->cpt_op |= CPT_OP_AUTH_VERIFY;
} else {
- CPT_LOG_DP_ERR("Unknown aead operation\n");
+ CPT_LOG_DP_ERR("Unknown aead operation");
return -1;
}
switch (aead_form->algo) {
@@ -2658,7 +2658,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform,
ctx->dec_auth = 1;
}
} else {
- CPT_LOG_DP_ERR("Unknown cipher operation\n");
+ CPT_LOG_DP_ERR("Unknown cipher operation");
return -1;
}
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index f6be84ceb5..105450774e 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -9,7 +9,7 @@
extern int idpf_common_logtype;
-#define DRV_LOG_RAW(level, ...) \
+#define DRV_LOG(level, ...) \
rte_log(RTE_LOG_ ## level, \
idpf_common_logtype, \
RTE_FMT("%s(): " \
@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
__func__, \
RTE_FMT_TAIL(__VA_ARGS__,)))
-#define DRV_LOG(level, fmt, args...) \
- DRV_LOG_RAW(level, fmt "\n", ## args)
-
#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
#define RX_LOG(level, ...) \
RTE_LOG(level, \
diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 4fd3fda721..f98942c79c 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -264,7 +264,7 @@ octeontx_start_domain(void)
result = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
if (result != 0) {
- mbox_log_err("Could not start domain. Err=%d. FuncErr=%d\n",
+ mbox_log_err("Could not start domain. Err=%d. FuncErr=%d",
result, hdr.res_code);
result = -EINVAL;
}
@@ -288,7 +288,7 @@ octeontx_check_mbox_version(struct mbox_intf_ver *app_intf_ver,
sizeof(struct mbox_intf_ver),
&kernel_intf_ver, sizeof(kernel_intf_ver));
if (result != sizeof(kernel_intf_ver)) {
- mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d\n",
+ mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d",
result, hdr.res_code);
result = -EINVAL;
}
diff --git a/drivers/common/qat/qat_pf2vf.c b/drivers/common/qat/qat_pf2vf.c
index 621f12fce2..9b25fdc6a0 100644
--- a/drivers/common/qat/qat_pf2vf.c
+++ b/drivers/common/qat/qat_pf2vf.c
@@ -36,7 +36,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
}
if ((pf2vf_msg.msg_type & type_mask) != pf2vf_msg.msg_type) {
- QAT_LOG(ERR, "PF2VF message type 0x%X out of range\n",
+ QAT_LOG(ERR, "PF2VF message type 0x%X out of range",
pf2vf_msg.msg_type);
return -EINVAL;
}
@@ -65,7 +65,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
(++count < ADF_IOV_MSG_ACK_MAX_RETRY));
if (val & ADF_PFVF_INT) {
- QAT_LOG(ERR, "ACK not received from remote\n");
+ QAT_LOG(ERR, "ACK not received from remote");
return -EIO;
}
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index f95dd33375..21a110d22e 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -267,7 +267,7 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
if (qat_qp_check_queue_alignment(queue->base_phys_addr,
queue_size_bytes)) {
QAT_LOG(ERR, "Invalid alignment on queue create "
- " 0x%"PRIx64"\n",
+ " 0x%"PRIx64,
queue->base_phys_addr);
ret = -EFAULT;
goto queue_create_err;
diff --git a/drivers/compress/isal/isal_compress_pmd.c b/drivers/compress/isal/isal_compress_pmd.c
index cb23e929ed..0e783243a8 100644
--- a/drivers/compress/isal/isal_compress_pmd.c
+++ b/drivers/compress/isal/isal_compress_pmd.c
@@ -42,10 +42,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
/* Set private xform algorithm */
if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) {
if (xform->compress.algo == RTE_COMP_ALGO_NULL) {
- ISAL_PMD_LOG(ERR, "By-pass not supported\n");
+ ISAL_PMD_LOG(ERR, "By-pass not supported");
return -ENOTSUP;
}
- ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+ ISAL_PMD_LOG(ERR, "Algorithm not supported");
return -ENOTSUP;
}
priv_xform->compress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -55,7 +55,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
priv_xform->compress.window_size =
RTE_COMP_ISAL_WINDOW_SIZE;
else {
- ISAL_PMD_LOG(ERR, "Window size not supported\n");
+ ISAL_PMD_LOG(ERR, "Window size not supported");
return -ENOTSUP;
}
@@ -74,7 +74,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
RTE_COMP_HUFFMAN_DYNAMIC;
break;
default:
- ISAL_PMD_LOG(ERR, "Huffman code not supported\n");
+ ISAL_PMD_LOG(ERR, "Huffman code not supported");
return -ENOTSUP;
}
@@ -92,10 +92,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
break;
case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
- " supported\n");
+ " supported");
return -ENOTSUP;
default:
- ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+ ISAL_PMD_LOG(ERR, "Checksum type not supported");
priv_xform->compress.chksum = IGZIP_DEFLATE;
break;
}
@@ -105,21 +105,21 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
*/
if (xform->compress.level < RTE_COMP_LEVEL_PMD_DEFAULT ||
xform->compress.level > RTE_COMP_LEVEL_MAX) {
- ISAL_PMD_LOG(ERR, "Compression level out of range\n");
+ ISAL_PMD_LOG(ERR, "Compression level out of range");
return -EINVAL;
}
/* Check for Compressdev API level 0, No compression
* not supported in ISA-L
*/
else if (xform->compress.level == RTE_COMP_LEVEL_NONE) {
- ISAL_PMD_LOG(ERR, "No Compression not supported\n");
+ ISAL_PMD_LOG(ERR, "No Compression not supported");
return -ENOTSUP;
}
/* If using fixed huffman code, level must be 0 */
else if (priv_xform->compress.deflate.huffman ==
RTE_COMP_HUFFMAN_FIXED) {
ISAL_PMD_LOG(DEBUG, "ISA-L level 0 used due to a"
- " fixed huffman code\n");
+ " fixed huffman code");
priv_xform->compress.level = RTE_COMP_ISAL_LEVEL_ZERO;
priv_xform->level_buffer_size =
ISAL_DEF_LVL0_DEFAULT;
@@ -169,7 +169,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
ISAL_PMD_LOG(DEBUG, "Requested ISA-L level"
" 3 or above; Level 3 optimized"
" for AVX512 & AVX2 only."
- " level changed to 2.\n");
+ " level changed to 2.");
priv_xform->compress.level =
RTE_COMP_ISAL_LEVEL_TWO;
priv_xform->level_buffer_size =
@@ -188,10 +188,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
/* Set private xform algorithm */
if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) {
if (xform->decompress.algo == RTE_COMP_ALGO_NULL) {
- ISAL_PMD_LOG(ERR, "By pass not supported\n");
+ ISAL_PMD_LOG(ERR, "By pass not supported");
return -ENOTSUP;
}
- ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+ ISAL_PMD_LOG(ERR, "Algorithm not supported");
return -ENOTSUP;
}
priv_xform->decompress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -210,10 +210,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
break;
case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
- " supported\n");
+ " supported");
return -ENOTSUP;
default:
- ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+ ISAL_PMD_LOG(ERR, "Checksum type not supported");
priv_xform->decompress.chksum = ISAL_DEFLATE;
break;
}
@@ -223,7 +223,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
priv_xform->decompress.window_size =
RTE_COMP_ISAL_WINDOW_SIZE;
else {
- ISAL_PMD_LOG(ERR, "Window size not supported\n");
+ ISAL_PMD_LOG(ERR, "Window size not supported");
return -ENOTSUP;
}
}
@@ -263,7 +263,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
remaining_offset);
if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
- ISAL_PMD_LOG(ERR, "Invalid source or destination buffer\n");
+ ISAL_PMD_LOG(ERR, "Invalid source or destination buffer");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -279,7 +279,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
remaining_data = op->src.length - qp->stream->total_in;
if (ret != COMP_OK) {
- ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Compression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -294,7 +294,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
RTE_MIN(remaining_data, src->data_len);
} else {
ISAL_PMD_LOG(ERR,
- "Not enough input buffer segments\n");
+ "Not enough input buffer segments");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -309,7 +309,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
qp->stream->avail_out = dst->data_len;
} else {
ISAL_PMD_LOG(ERR,
- "Not enough output buffer segments\n");
+ "Not enough output buffer segments");
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return -1;
@@ -378,14 +378,14 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)
if (ret == ISAL_OUT_OVERFLOW) {
ISAL_PMD_LOG(ERR, "Decompression operation ran "
- "out of space, but can be recovered.\n%d bytes "
- "consumed\t%d bytes produced\n",
+ "out of space, but can be recovered.%d bytes "
+ "consumed\t%d bytes produced",
consumed_data, qp->state->total_out);
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
return ret;
} else if (ret < 0) {
- ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Decompression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -399,7 +399,7 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)
qp->state->avail_out = dst->data_len;
} else {
ISAL_PMD_LOG(ERR,
- "Not enough output buffer segments\n");
+ "Not enough output buffer segments");
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return -1;
@@ -451,14 +451,14 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
IGZIP_HUFFTABLE_DEFAULT);
if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
- ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+ ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
if (op->dst.offset >= op->m_dst->pkt_len) {
ISAL_PMD_LOG(ERR, "Output mbuf(s) not big enough"
- " for offset provided.\n");
+ " for offset provided.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -483,7 +483,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
ISAL_PMD_LOG(ERR, "Invalid source or destination"
- " buffers\n");
+ " buffers");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -493,7 +493,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that output buffer did not run out of space */
if (ret == STATELESS_OVERFLOW) {
- ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+ ISAL_PMD_LOG(ERR, "Output buffer not big enough");
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return ret;
}
@@ -501,13 +501,13 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that input buffer has been fully consumed */
if (qp->stream->avail_in != (uint32_t)0) {
ISAL_PMD_LOG(ERR, "Input buffer could not be read"
- " entirely\n");
+ " entirely");
op->status = RTE_COMP_OP_STATUS_ERROR;
return -1;
}
if (ret != COMP_OK) {
- ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Compression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -543,14 +543,14 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
qp->state->crc_flag = priv_xform->decompress.chksum;
if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
- ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+ ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
if (op->dst.offset >= op->m_dst->pkt_len) {
ISAL_PMD_LOG(ERR, "Output mbuf not big enough for "
- "offset provided.\n");
+ "offset provided.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -574,7 +574,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
if (unlikely(!qp->state->next_in || !qp->state->next_out)) {
ISAL_PMD_LOG(ERR, "Invalid source or destination"
- " buffers\n");
+ " buffers");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -583,7 +583,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
ret = isal_inflate_stateless(qp->state);
if (ret == ISAL_OUT_OVERFLOW) {
- ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+ ISAL_PMD_LOG(ERR, "Output buffer not big enough");
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return ret;
}
@@ -591,13 +591,13 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that input buffer has been fully consumed */
if (qp->state->avail_in != (uint32_t)0) {
ISAL_PMD_LOG(ERR, "Input buffer could not be read"
- " entirely\n");
+ " entirely");
op->status = RTE_COMP_OP_STATUS_ERROR;
return -1;
}
if (ret != ISAL_DECOMP_OK && ret != ISAL_END_INPUT) {
- ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Decompression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -622,7 +622,7 @@ process_op(struct isal_comp_qp *qp, struct rte_comp_op *op,
process_isal_inflate(op, qp, priv_xform);
break;
default:
- ISAL_PMD_LOG(ERR, "Operation Not Supported\n");
+ ISAL_PMD_LOG(ERR, "Operation Not Supported");
return -ENOTSUP;
}
return 0;
@@ -641,7 +641,7 @@ isal_comp_pmd_enqueue_burst(void *queue_pair, struct rte_comp_op **ops,
for (i = 0; i < num_enq; i++) {
if (unlikely(ops[i]->op_type != RTE_COMP_OP_STATELESS)) {
ops[i]->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ISAL_PMD_LOG(ERR, "Stateful operation not Supported\n");
+ ISAL_PMD_LOG(ERR, "Stateful operation not Supported");
qp->qp_stats.enqueue_err_count++;
continue;
}
@@ -696,7 +696,7 @@ compdev_isal_create(const char *name, struct rte_vdev_device *vdev,
dev->dequeue_burst = isal_comp_pmd_dequeue_burst;
dev->enqueue_burst = isal_comp_pmd_enqueue_burst;
- ISAL_PMD_LOG(INFO, "\nISA-L library version used: "ISAL_VERSION_STRING);
+ ISAL_PMD_LOG(INFO, "ISA-L library version used: "ISAL_VERSION_STRING);
return 0;
}
@@ -739,7 +739,7 @@ compdev_isal_probe(struct rte_vdev_device *dev)
retval = rte_compressdev_pmd_parse_input_args(&init_params, args);
if (retval) {
ISAL_PMD_LOG(ERR,
- "Failed to parse initialisation arguments[%s]\n", args);
+ "Failed to parse initialisation arguments[%s]", args);
return -EINVAL;
}
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index 7391360925..d52f937548 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -206,7 +206,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
break;
}
- ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64,
sgidx, sginfo[sgidx].sg_ctl.s.length, sginfo[sgidx].sg_addr.s.addr);
++sgidx;
}
@@ -219,7 +219,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
}
qp->num_sgbuf = ++sgidx;
- ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64"\n", tot_buf_len,
+ ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64, tot_buf_len,
qp->num_sgbuf);
return ret;
}
@@ -246,7 +246,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
inst->s.inp_ptr_ctl.s.length = qp->num_sgbuf;
inst->s.inp_ptr_ctl.s.fw = 0;
- ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64,
inst->s.inp_ptr_ctl.s.length, inst->s.inp_ptr_addr.s.addr);
return ret;
}
@@ -256,7 +256,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
inst->s.inp_ptr_addr.s.addr = rte_pktmbuf_iova_offset(m_src, offset);
inst->s.inp_ptr_ctl.s.length = inlen;
- ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d\n", inlen);
+ ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d", inlen);
return ret;
}
@@ -282,7 +282,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co
inst->s.out_ptr_addr.s.addr = rte_mem_virt2iova(qp->s_info);
inst->s.out_ptr_ctl.s.length = qp->num_sgbuf;
- ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64,
inst->s.out_ptr_ctl.s.length, inst->s.out_ptr_addr.s.addr);
return ret;
}
@@ -296,7 +296,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co
inst->s.out_ptr_ctl.s.length = inst->s.totaloutputlength;
- ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d\n", inst->s.totaloutputlength);
+ ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d", inst->s.totaloutputlength);
return ret;
}
diff --git a/drivers/compress/octeontx/otx_zip_pmd.c b/drivers/compress/octeontx/otx_zip_pmd.c
index fd20139da6..c8f456b319 100644
--- a/drivers/compress/octeontx/otx_zip_pmd.c
+++ b/drivers/compress/octeontx/otx_zip_pmd.c
@@ -161,7 +161,7 @@ zip_set_stream_parameters(struct rte_compressdev *dev,
*/
} else {
- ZIP_PMD_ERR("\nxform type not supported");
+ ZIP_PMD_ERR("xform type not supported");
ret = -1;
goto err;
}
@@ -527,7 +527,7 @@ zip_pmd_enqueue_burst(void *queue_pair,
}
qp->enqed = enqd;
- ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d\n", nb_ops, enqd);
+ ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d", nb_ops, enqd);
return enqd;
}
@@ -563,7 +563,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->status = RTE_COMP_OP_STATUS_SUCCESS;
} else {
/* FATAL error cannot do anything */
- ZIP_PMD_ERR("operation failed with error code:%d\n",
+ ZIP_PMD_ERR("operation failed with error code:%d",
zresult->s.compcode);
if (zresult->s.compcode == ZIP_COMP_E_DSTOP)
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
@@ -571,7 +571,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->status = RTE_COMP_OP_STATUS_ERROR;
}
- ZIP_PMD_LOG(DEBUG, "written %d\n", zresult->s.totalbyteswritten);
+ ZIP_PMD_LOG(DEBUG, "written %d", zresult->s.totalbyteswritten);
/* Update op stats */
switch (op->status) {
@@ -582,7 +582,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->produced = zresult->s.totalbyteswritten;
break;
default:
- ZIP_PMD_ERR("stats not updated for status:%d\n",
+ ZIP_PMD_ERR("stats not updated for status:%d",
op->status);
break;
}
@@ -598,7 +598,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
rte_mempool_put(qp->vf->sg_mp, qp->s_info);
}
- ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d\n", nb_ops, nb_dequeued);
+ ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d", nb_ops, nb_dequeued);
return nb_dequeued;
}
@@ -676,7 +676,7 @@ zip_pci_remove(struct rte_pci_device *pci_dev)
char compressdev_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
if (pci_dev == NULL) {
- ZIP_PMD_ERR(" Invalid PCI Device\n");
+ ZIP_PMD_ERR(" Invalid PCI Device");
return -EINVAL;
}
rte_pci_device_name(&pci_dev->addr, compressdev_name,
diff --git a/drivers/compress/zlib/zlib_pmd.c b/drivers/compress/zlib/zlib_pmd.c
index 98abd41013..92e808e78c 100644
--- a/drivers/compress/zlib/zlib_pmd.c
+++ b/drivers/compress/zlib/zlib_pmd.c
@@ -29,13 +29,13 @@ process_zlib_deflate(struct rte_comp_op *op, z_stream *strm)
break;
default:
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid flush value\n");
+ ZLIB_PMD_ERR("Invalid flush value");
return;
}
if (unlikely(!strm)) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid z_stream\n");
+ ZLIB_PMD_ERR("Invalid z_stream");
return;
}
/* Update z_stream with the inputs provided by application */
@@ -98,7 +98,7 @@ def_end:
op->produced += strm->total_out;
break;
default:
- ZLIB_PMD_ERR("stats not updated for status:%d\n",
+ ZLIB_PMD_ERR("stats not updated for status:%d",
op->status);
}
@@ -114,7 +114,7 @@ process_zlib_inflate(struct rte_comp_op *op, z_stream *strm)
if (unlikely(!strm)) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid z_stream\n");
+ ZLIB_PMD_ERR("Invalid z_stream");
return;
}
strm->next_in = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
@@ -184,7 +184,7 @@ inf_end:
op->produced += strm->total_out;
break;
default:
- ZLIB_PMD_ERR("stats not produced for status:%d\n",
+ ZLIB_PMD_ERR("stats not produced for status:%d",
op->status);
}
@@ -203,7 +203,7 @@ process_zlib_op(struct zlib_qp *qp, struct rte_comp_op *op)
(op->dst.offset > rte_pktmbuf_data_len(op->m_dst))) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
ZLIB_PMD_ERR("Invalid source or destination buffers or "
- "invalid Operation requested\n");
+ "invalid Operation requested");
} else {
private_xform = (struct zlib_priv_xform *)op->private_xform;
stream = &private_xform->stream;
@@ -238,7 +238,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
wbits = -(xform->compress.window_size);
break;
default:
- ZLIB_PMD_ERR("Compression algorithm not supported\n");
+ ZLIB_PMD_ERR("Compression algorithm not supported");
return -1;
}
/** Compression Level */
@@ -260,7 +260,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
if (level < RTE_COMP_LEVEL_MIN ||
level > RTE_COMP_LEVEL_MAX) {
ZLIB_PMD_ERR("Compression level %d "
- "not supported\n",
+ "not supported",
level);
return -1;
}
@@ -278,13 +278,13 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
strategy = Z_DEFAULT_STRATEGY;
break;
default:
- ZLIB_PMD_ERR("Compression strategy not supported\n");
+ ZLIB_PMD_ERR("Compression strategy not supported");
return -1;
}
if (deflateInit2(strm, level,
Z_DEFLATED, wbits,
DEF_MEM_LEVEL, strategy) != Z_OK) {
- ZLIB_PMD_ERR("Deflate init failed\n");
+ ZLIB_PMD_ERR("Deflate init failed");
return -1;
}
break;
@@ -298,12 +298,12 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
wbits = -(xform->decompress.window_size);
break;
default:
- ZLIB_PMD_ERR("Compression algorithm not supported\n");
+ ZLIB_PMD_ERR("Compression algorithm not supported");
return -1;
}
if (inflateInit2(strm, wbits) != Z_OK) {
- ZLIB_PMD_ERR("Inflate init failed\n");
+ ZLIB_PMD_ERR("Inflate init failed");
return -1;
}
break;
@@ -395,7 +395,7 @@ zlib_probe(struct rte_vdev_device *vdev)
retval = rte_compressdev_pmd_parse_input_args(&init_params, input_args);
if (retval < 0) {
ZLIB_PMD_LOG(ERR,
- "Failed to parse initialisation arguments[%s]\n",
+ "Failed to parse initialisation arguments[%s]",
input_args);
return -EINVAL;
}
diff --git a/drivers/compress/zlib/zlib_pmd_ops.c b/drivers/compress/zlib/zlib_pmd_ops.c
index 445a3baa67..a530d15119 100644
--- a/drivers/compress/zlib/zlib_pmd_ops.c
+++ b/drivers/compress/zlib/zlib_pmd_ops.c
@@ -48,8 +48,8 @@ zlib_pmd_config(struct rte_compressdev *dev,
NULL, config->socket_id,
0);
if (mp == NULL) {
- ZLIB_PMD_ERR("Cannot create private xform pool on "
- "socket %d\n", config->socket_id);
+ ZLIB_PMD_ERR("Cannot create private xform pool on socket %d",
+ config->socket_id);
return -ENOMEM;
}
internals->mp = mp;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index d1ede5e990..59e39a6c14 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -142,7 +142,7 @@ bcmfs_queue_create(struct bcmfs_queue *queue,
if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
BCMFS_LOG(ERR, "Invalid alignment on queue create "
- " 0x%" PRIx64 "\n",
+ " 0x%" PRIx64,
queue->base_phys_addr);
ret = -EFAULT;
goto queue_create_err;
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 78272d616c..d3b1e25d57 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -217,7 +217,7 @@ bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
cdev->data->queue_pairs[qp_id] = qp;
- BCMFS_LOG(NOTICE, "queue %d setup done\n", qp_id);
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
return 0;
}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
index 40813d1fe5..64bd4a317a 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -192,7 +192,7 @@ crypto_set_session_parameters(struct bcmfs_sym_session *sess,
rc = -EINVAL;
break;
default:
- BCMFS_DP_LOG(ERR, "Invalid chain order\n");
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
rc = -EINVAL;
break;
}
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index b55258689b..1713600db7 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -309,7 +309,7 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)
cdb = caam_jr_dma_mem_alloc(L1_CACHE_BYTES, sizeof(struct sec_cdb));
if (!cdb) {
- CAAM_JR_ERR("failed to allocate memory for cdb\n");
+ CAAM_JR_ERR("failed to allocate memory for cdb");
return -1;
}
@@ -606,7 +606,7 @@ hw_poll_job_ring(struct sec_job_ring_t *job_ring,
/*TODO for multiple ops, packets*/
ctx = container_of(current_desc, struct caam_jr_op_ctx, jobdes);
if (unlikely(sec_error_code)) {
- CAAM_JR_ERR("desc at cidx %d generated error 0x%x\n",
+ CAAM_JR_ERR("desc at cidx %d generated error 0x%x",
job_ring->cidx, sec_error_code);
hw_handle_job_ring_error(job_ring, sec_error_code);
//todo improve with exact errors
@@ -1368,7 +1368,7 @@ caam_jr_enqueue_op(struct rte_crypto_op *op, struct caam_jr_qp *qp)
}
if (unlikely(!ses->qp || ses->qp != qp)) {
- CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p\n", ses->qp, qp);
+ CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p", ses->qp, qp);
ses->qp = qp;
caam_jr_prep_cdb(ses);
}
@@ -1554,7 +1554,7 @@ caam_jr_cipher_init(struct rte_cryptodev *dev __rte_unused,
session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
RTE_CACHE_LINE_SIZE);
if (session->cipher_key.data == NULL && xform->cipher.key.length > 0) {
- CAAM_JR_ERR("No Memory for cipher key\n");
+ CAAM_JR_ERR("No Memory for cipher key");
return -ENOMEM;
}
session->cipher_key.length = xform->cipher.key.length;
@@ -1576,7 +1576,7 @@ caam_jr_auth_init(struct rte_cryptodev *dev __rte_unused,
session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
RTE_CACHE_LINE_SIZE);
if (session->auth_key.data == NULL && xform->auth.key.length > 0) {
- CAAM_JR_ERR("No Memory for auth key\n");
+ CAAM_JR_ERR("No Memory for auth key");
return -ENOMEM;
}
session->auth_key.length = xform->auth.key.length;
@@ -1602,7 +1602,7 @@ caam_jr_aead_init(struct rte_cryptodev *dev __rte_unused,
session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
RTE_CACHE_LINE_SIZE);
if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
- CAAM_JR_ERR("No Memory for aead key\n");
+ CAAM_JR_ERR("No Memory for aead key");
return -ENOMEM;
}
session->aead_key.length = xform->aead.key.length;
@@ -1755,7 +1755,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
RTE_CACHE_LINE_SIZE);
if (session->cipher_key.data == NULL &&
cipher_xform->key.length > 0) {
- CAAM_JR_ERR("No Memory for cipher key\n");
+ CAAM_JR_ERR("No Memory for cipher key");
return -ENOMEM;
}
@@ -1765,7 +1765,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
RTE_CACHE_LINE_SIZE);
if (session->auth_key.data == NULL &&
auth_xform->key.length > 0) {
- CAAM_JR_ERR("No Memory for auth key\n");
+ CAAM_JR_ERR("No Memory for auth key");
rte_free(session->cipher_key.data);
return -ENOMEM;
}
@@ -1810,11 +1810,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
case RTE_CRYPTO_AUTH_ZUC_EIA3:
- CAAM_JR_ERR("Crypto: Unsupported auth alg %u\n",
+ CAAM_JR_ERR("Crypto: Unsupported auth alg %u",
auth_xform->algo);
goto out;
default:
- CAAM_JR_ERR("Crypto: Undefined Auth specified %u\n",
+ CAAM_JR_ERR("Crypto: Undefined Auth specified %u",
auth_xform->algo);
goto out;
}
@@ -1834,11 +1834,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_KASUMI_F8:
- CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u\n",
+ CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u",
cipher_xform->algo);
goto out;
default:
- CAAM_JR_ERR("Crypto: Undefined Cipher specified %u\n",
+ CAAM_JR_ERR("Crypto: Undefined Cipher specified %u",
cipher_xform->algo);
goto out;
}
@@ -1962,7 +1962,7 @@ caam_jr_dev_configure(struct rte_cryptodev *dev,
NULL, NULL, NULL, NULL,
SOCKET_ID_ANY, 0);
if (!internals->ctx_pool) {
- CAAM_JR_ERR("%s create failed\n", str);
+ CAAM_JR_ERR("%s create failed", str);
return -ENOMEM;
}
} else
@@ -2180,7 +2180,7 @@ init_job_ring(void *reg_base_addr, int irq_id)
}
}
if (job_ring == NULL) {
- CAAM_JR_ERR("No free job ring\n");
+ CAAM_JR_ERR("No free job ring");
return NULL;
}
@@ -2301,7 +2301,7 @@ caam_jr_dev_init(const char *name,
job_ring->uio_fd);
if (!dev->data->dev_private) {
- CAAM_JR_ERR("Ring memory allocation failed\n");
+ CAAM_JR_ERR("Ring memory allocation failed");
goto cleanup2;
}
@@ -2334,7 +2334,7 @@ caam_jr_dev_init(const char *name,
security_instance = rte_malloc("caam_jr",
sizeof(struct rte_security_ctx), 0);
if (security_instance == NULL) {
- CAAM_JR_ERR("memory allocation failed\n");
+ CAAM_JR_ERR("memory allocation failed");
//todo error handling.
goto cleanup2;
}
diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c
index 583ba3b523..acb40bdf77 100644
--- a/drivers/crypto/caam_jr/caam_jr_uio.c
+++ b/drivers/crypto/caam_jr/caam_jr_uio.c
@@ -338,7 +338,7 @@ free_job_ring(int uio_fd)
}
if (job_ring == NULL) {
- CAAM_JR_ERR("JR not available for fd = %x\n", uio_fd);
+ CAAM_JR_ERR("JR not available for fd = %x", uio_fd);
return;
}
@@ -378,7 +378,7 @@ uio_job_ring *config_job_ring(void)
}
if (job_ring == NULL) {
- CAAM_JR_ERR("No free job ring\n");
+ CAAM_JR_ERR("No free job ring");
return NULL;
}
@@ -441,7 +441,7 @@ sec_configure(void)
dir->d_name, "name", uio_name);
CAAM_JR_INFO("sec device uio name: %s", uio_name);
if (ret != 0) {
- CAAM_JR_ERR("file_read_first_line failed\n");
+ CAAM_JR_ERR("file_read_first_line failed");
closedir(d);
return -1;
}
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index b7ca3af5a4..6d42b92d8b 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -362,7 +362,7 @@ ccp_find_lsb_regions(struct ccp_queue *cmd_q, uint64_t status)
if (ccp_get_bit(&cmd_q->lsbmask, j))
weight++;
- CCP_LOG_DBG("Queue %d can access %d LSB regions of mask %lu\n",
+ CCP_LOG_DBG("Queue %d can access %d LSB regions of mask %lu",
(int)cmd_q->id, weight, cmd_q->lsbmask);
return weight ? 0 : -EINVAL;
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a5271d7227..c92fdb446d 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
}
cryptodev_cnt++;
- CCP_LOG_DBG("CCP : Crypto device count = %d\n", cryptodev_cnt);
+ CCP_LOG_DBG("CCP : Crypto device count = %d", cryptodev_cnt);
dev->device = &pci_dev->device;
dev->device->driver = &pci_drv->driver;
dev->driver_id = ccp_cryptodev_driver_id;
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..cf163e0208 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
sess->cpt_op |= ROC_SE_OP_AUTH_VERIFY;
} else {
- plt_dp_err("Unknown aead operation\n");
+ plt_dp_err("Unknown aead operation");
return -1;
}
switch (aead_form->algo) {
@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_DECRYPT;
} else {
- plt_dp_err("Unknown cipher operation\n");
+ plt_dp_err("Unknown cipher operation");
return -1;
}
@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
ROC_SE_FC_MINOR_OP_HMAC_FIRST;
}
} else {
- plt_dp_err("Unknown cipher operation\n");
+ plt_dp_err("Unknown cipher operation");
return -1;
}
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6ae356ace0..b65bea3b3f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1146,7 +1146,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
- " data_off: 0x%x\n",
+ " data_off: 0x%x",
data_offset,
data_len,
sess->iv.length,
@@ -1172,7 +1172,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_FIN(sge);
DPAA2_SEC_DP_DEBUG(
- "CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+ "CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -1212,7 +1212,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: fdaddr =%" PRIx64 " bpid =%d meta =%d"
- " off =%d, len =%d\n",
+ " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_BPID(fd),
rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1292,7 +1292,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
- " data_off: 0x%x\n",
+ " data_off: 0x%x",
data_offset,
data_len,
sess->iv.length,
@@ -1303,7 +1303,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle->length = data_len + sess->iv.length;
DPAA2_SEC_DP_DEBUG(
- "CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
+ "CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -1326,7 +1326,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER: fdaddr =%" PRIx64 " bpid =%d meta =%d"
- " off =%d, len =%d\n",
+ " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_BPID(fd),
rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1348,12 +1348,12 @@ build_sec_fd(struct rte_crypto_op *op,
} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
sess = SECURITY_GET_SESS_PRIV(op->sym->session);
} else {
- DPAA2_SEC_DP_ERR("Session type invalid\n");
+ DPAA2_SEC_DP_ERR("Session type invalid");
return -ENOTSUP;
}
if (!sess) {
- DPAA2_SEC_DP_ERR("Session not available\n");
+ DPAA2_SEC_DP_ERR("Session not available");
return -EINVAL;
}
@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
bpid = mempool_to_bpid(mb_pool);
ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
if (ret) {
- DPAA2_SEC_DP_DEBUG("FD build failed\n");
+ DPAA2_SEC_DP_DEBUG("FD build failed");
goto skip_tx;
}
ops++;
@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
num_tx += loop;
nb_ops -= loop;
- DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+ DPAA2_SEC_DP_DEBUG("Enqueue fail");
/* freeing the fle buffers */
while (loop < frames_to_send) {
free_fle(&fd_arr[loop],
@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
- DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+ DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
/* we are using the first FLE entry to store Mbuf.
@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
}
DPAA2_SEC_DP_DEBUG("mbuf %p BMAN buf addr %p,"
- " fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d\n",
+ " fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d",
(void *)dst,
dst->buf_addr,
DPAA2_GET_FD_ADDR(fd),
@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
bpid = mempool_to_bpid(mb_pool);
ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
if (ret) {
- DPAA2_SEC_DP_DEBUG("FD build failed\n");
+ DPAA2_SEC_DP_DEBUG("FD build failed");
goto skip_tx;
}
ops++;
@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
num_tx += loop;
nb_ops -= loop;
- DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+ DPAA2_SEC_DP_DEBUG("Enqueue fail");
/* freeing the fle buffers */
while (loop < frames_to_send) {
free_fle(&fd_arr[loop],
@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely(
(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+ DPAA2_SEC_DP_DEBUG("No frame is delivered");
continue;
}
}
@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_NO_DUMP) {
- DPAA2_SEC_DP_ERR("SEC returned Error - %x\n",
+ DPAA2_SEC_DP_ERR("SEC returned Error - %x",
fd->simple.frc);
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_ERR_DUMP)
dpaa2_sec_dump(ops[num_rx]);
@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
dpaa2_qp->rx_vq.rx_pkts += num_rx;
- DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64 "\n", num_rx,
+ DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64, num_rx,
dpaa2_qp->rx_vq.err_pkts);
/*Return the total number of packets received to DPAA2 app*/
return num_rx;
@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
#ifdef CAAM_DESC_DEBUG
int i;
for (i = 0; i < bufsize; i++)
- DPAA2_SEC_DEBUG("DESC[%d]:0x%x\n",
+ DPAA2_SEC_DEBUG("DESC[%d]:0x%x",
i, priv->flc_desc[0].desc[i]);
#endif
return ret;
@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_FULL_DUMP) {
DPAA2_SEC_WARN("WARN: DPAA2_SEC_DP_DUMP_LEVEL is not "
"supported, changing to FULL error"
- " prints\n");
+ " prints");
dpaa2_sec_dp_dump = DPAA2_SEC_DP_FULL_DUMP;
}
} else
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 4754b9d6f8..883584a6e2 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -605,7 +605,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
flc = &priv->flc_desc[0].flc;
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d\n",
+ "RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d",
data_offset,
data_len,
sess->iv.length);
@@ -642,7 +642,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
DPAA2_SET_FLE_FIN(sge);
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+ "RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -678,7 +678,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d\n",
+ "RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_OFFSET(fd),
DPAA2_GET_FD_LEN(fd));
@@ -721,7 +721,7 @@ dpaa2_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -811,7 +811,7 @@ sec_fd_to_userdata(const struct qbman_fd *fd)
void *userdata;
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
- DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+ DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
userdata = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1));
/* free the fle memory */
@@ -847,7 +847,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -900,7 +900,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely(
(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+ DPAA2_SEC_DP_DEBUG("No frame is delivered");
continue;
}
}
@@ -929,7 +929,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
*dequeue_status = 1;
*n_success = num_rx;
- DPAA2_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA2_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
/*Return the total number of packets received to DPAA2 app*/
return num_rx;
}
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 906ea39047..131cd90c94 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -102,7 +102,7 @@ ern_sec_fq_handler(struct qman_portal *qm __rte_unused,
struct qman_fq *fq,
const struct qm_mr_entry *msg)
{
- DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x\n",
+ DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x",
fq->fqid, msg->ern.rc, msg->ern.seqnum);
}
@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
} else {
if (dpaa_sec_dp_dump > DPAA_SEC_DP_NO_DUMP) {
- DPAA_SEC_DP_WARN("SEC return err:0x%x\n",
+ DPAA_SEC_DP_WARN("SEC return err:0x%x",
ctx->fd_status);
if (dpaa_sec_dp_dump > DPAA_SEC_DP_ERR_DUMP)
dpaa_sec_dump(ctx, qp);
@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
} else if (unlikely(ses->qp[rte_lcore_id() %
MAX_DPAA_CORES] != qp)) {
DPAA_SEC_DP_ERR("Old:sess->qp = %p"
- " New qp = %p\n",
+ " New qp = %p",
ses->qp[rte_lcore_id() %
MAX_DPAA_CORES], qp);
frames_to_send = loop;
@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
fd->cmd = 0x80000000 |
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset));
- DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset)),
ses->pdcp.hfn_ovd);
@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
dpaa_qp->rx_pkts += num_rx;
dpaa_qp->rx_errs += nb_ops - num_rx;
- DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
return num_rx;
}
@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
NULL, NULL, NULL, NULL,
SOCKET_ID_ANY, 0);
if (!qp->ctx_pool) {
- DPAA_SEC_ERR("%s create failed\n", str);
+ DPAA_SEC_ERR("%s create failed", str);
return -ENOMEM;
}
} else
@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
RTE_CACHE_LINE_SIZE);
if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
- DPAA_SEC_ERR("No Memory for aead key\n");
+ DPAA_SEC_ERR("No Memory for aead key");
return -ENOMEM;
}
session->aead_key.length = xform->aead.key.length;
@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
if (&qi->inq[i] == fq) {
if (qman_retire_fq(fq, NULL) != 0)
- DPAA_SEC_DEBUG("Queue is not retired\n");
+ DPAA_SEC_DEBUG("Queue is not retired");
qman_oos_fq(fq);
qi->inq_attach[i] = 0;
return 0;
@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
qp->outq.cb.dqrr_dpdk_cb = dpaa_sec_process_atomic_event;
break;
case RTE_SCHED_TYPE_ORDERED:
- DPAA_SEC_ERR("Ordered queue schedule type is not supported\n");
+ DPAA_SEC_ERR("Ordered queue schedule type is not supported");
return -ENOTSUP;
default:
opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
dpaa_sec_dp_dump = atoi(value);
if (dpaa_sec_dp_dump > DPAA_SEC_DP_FULL_DUMP) {
DPAA_SEC_WARN("WARN: DPAA_SEC_DP_DUMP_LEVEL is not "
- "supported, changing to FULL error prints\n");
+ "supported, changing to FULL error prints");
dpaa_sec_dp_dump = DPAA_SEC_DP_FULL_DUMP;
}
@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
ret = munmap(internals->sec_hw, MAP_SIZE);
if (ret)
- DPAA_SEC_WARN("munmap failed\n");
+ DPAA_SEC_WARN("munmap failed");
close(map_fd);
cryptodev->driver_id = dpaa_cryptodev_driver_id;
@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
return 0;
init_error:
- DPAA_SEC_ERR("driver %s: create failed\n", cryptodev->data->name);
+ DPAA_SEC_ERR("driver %s: create failed", cryptodev->data->name);
rte_free(cryptodev->security_ctx);
return -EFAULT;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_log.h b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
index fb895a8bc6..82ac1fa1c4 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_log.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;
/* DP Logs, toggled out at compile time if level lower than current level */
#define DPAA_SEC_DP_LOG(level, fmt, args...) \
- RTE_LOG_DP(level, PMD, fmt, ## args)
+ RTE_LOG_DP_LINE(level, PMD, fmt, ## args)
#define DPAA_SEC_DP_DEBUG(fmt, args...) \
DPAA_SEC_DP_LOG(DEBUG, fmt, ## args)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index ce49c4996f..f62c803894 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -761,7 +761,7 @@ build_dpaa_raw_proto_sg(uint8_t *drv_ctx,
fd->cmd = 0x80000000 |
*((uint32_t *)((uint8_t *)userdata +
ses->pdcp.hfn_ovd_offset));
- DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
*((uint32_t *)((uint8_t *)userdata +
ses->pdcp.hfn_ovd_offset)),
ses->pdcp.hfn_ovd);
@@ -806,7 +806,7 @@ dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
} else if (unlikely(ses->qp[rte_lcore_id() %
MAX_DPAA_CORES] != dpaa_qp)) {
DPAA_SEC_DP_ERR("Old:sess->qp = %p"
- " New qp = %p\n",
+ " New qp = %p",
ses->qp[rte_lcore_id() %
MAX_DPAA_CORES], dpaa_qp);
frames_to_send = loop;
@@ -955,7 +955,7 @@ dpaa_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
*dequeue_status = 1;
*n_success = num_rx;
- DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
return num_rx;
}
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index f485d130b6..0d2538832d 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -165,7 +165,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,
rte_cryptodev_pmd_probing_finish(dev);
- IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n",
+ IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s",
imb_get_version_str());
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
@@ -176,7 +176,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,
if (retval)
IPSEC_MB_LOG(ERR,
- "IPSec Multi-buffer register MP request failed.\n");
+ "IPSec Multi-buffer register MP request failed.");
}
return retval;
}
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index 52722f94a0..252bcb3192 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -198,7 +198,7 @@ alloc_init_mb_mgr(void)
IMB_MGR *mb_mgr = alloc_mb_mgr(0);
if (unlikely(mb_mgr == NULL)) {
- IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data\n");
+ IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data");
return NULL;
}
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 80de25c65b..8e74645e0a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
uint16_t xcbc_mac_digest_len =
get_truncated_digest_byte_length(IMB_AUTH_AES_XCBC);
if (sess->auth.req_digest_len != xcbc_mac_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
get_digest_byte_length(IMB_AUTH_AES_CMAC);
if (sess->auth.req_digest_len > cmac_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
/*
@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len >
get_digest_byte_length(IMB_AUTH_AES_GMAC)) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
sess->template_job.key_len_in_bytes = IMB_KEY_256_BYTES;
break;
default:
- IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+ IPSEC_MB_LOG(ERR, "Invalid authentication key length");
return -EINVAL;
}
sess->template_job.u.GMAC._key = &sess->cipher.gcm_key;
@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
sess->template_job.hash_alg = IMB_AUTH_ZUC_EIA3_BITLEN;
if (sess->auth.req_digest_len != 4) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
} else if (xform->auth.key.length == 32) {
@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
#else
if (sess->auth.req_digest_len != 4) {
#endif
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
} else {
- IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+ IPSEC_MB_LOG(ERR, "Invalid authentication key length");
return -EINVAL;
}
@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
get_truncated_digest_byte_length(
IMB_AUTH_SNOW3G_UIA2_BITLEN);
if (sess->auth.req_digest_len != snow3g_uia2_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
uint16_t kasumi_f9_digest_len =
get_truncated_digest_byte_length(IMB_AUTH_KASUMI_UIA1);
if (sess->auth.req_digest_len != kasumi_f9_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len > full_digest_size ||
sess->auth.req_digest_len == 0) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len < AES_CCM_DIGEST_MIN_LEN ||
sess->auth.req_digest_len > AES_CCM_DIGEST_MAX_LEN ||
(sess->auth.req_digest_len & 1) == 1) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
/* GCM digest size must be between 1 and 16 */
if (sess->auth.req_digest_len == 0 ||
sess->auth.req_digest_len > 16) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
sess->template_job.enc_keys = sess->cipher.expanded_aes_keys.encode;
sess->template_job.dec_keys = sess->cipher.expanded_aes_keys.decode;
if (sess->auth.req_digest_len != 16) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
total_len = sgl_linear_cipher_auth_len(job, &auth_len);
linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
if (linear_buf == NULL) {
- IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer");
return -1;
}
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index e64df1a462..a0b354bb83 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -186,7 +186,7 @@ process_snow3g_cipher_op_bit(struct ipsec_mb_qp *qp,
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "bit-level in-place not supported\n");
+ IPSEC_MB_LOG(ERR, "bit-level in-place not supported");
return 0;
}
length_in_bits = op->sym->cipher.data.length;
@@ -317,7 +317,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
IPSEC_MB_LOG(ERR,
"PMD supports only contiguous mbufs, "
"op (%p) provides noncontiguous mbuf as "
- "source/destination buffer.\n", ops[i]);
+ "source/destination buffer.", ops[i]);
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
return 0;
}
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 4647d568de..aa2363ef15 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -211,7 +211,7 @@ otx_cpt_ring_dbell(struct cpt_instance *instance, uint16_t count)
static __rte_always_inline void *
get_cpt_inst(struct command_queue *cqueue)
{
- CPT_LOG_DP_DEBUG("CPT queue idx %u\n", cqueue->idx);
+ CPT_LOG_DP_DEBUG("CPT queue idx %u", cqueue->idx);
return &cqueue->qhead[cqueue->idx * CPT_INST_SIZE];
}
@@ -305,9 +305,9 @@ complete:
" error, MC completion code : 0x%x", user_req,
ret);
}
- CPT_LOG_DP_DEBUG("MC status %.8x\n",
+ CPT_LOG_DP_DEBUG("MC status %.8x",
*((volatile uint32_t *)user_req->alternate_caddr));
- CPT_LOG_DP_DEBUG("HW status %.8x\n",
+ CPT_LOG_DP_DEBUG("HW status %.8x",
*((volatile uint32_t *)user_req->completion_addr));
} else if ((cptres->s8x.compcode == CPT_8X_COMP_E_SWERR) ||
(cptres->s8x.compcode == CPT_8X_COMP_E_FAULT)) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 101111e85b..e10a172f46 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -57,13 +57,13 @@ static void ossl_legacy_provider_load(void)
/* Load Multiple providers into the default (NULL) library context */
legacy = OSSL_PROVIDER_load(NULL, "legacy");
if (legacy == NULL) {
- OPENSSL_LOG(ERR, "Failed to load Legacy provider\n");
+ OPENSSL_LOG(ERR, "Failed to load Legacy provider");
return;
}
deflt = OSSL_PROVIDER_load(NULL, "default");
if (deflt == NULL) {
- OPENSSL_LOG(ERR, "Failed to load Default provider\n");
+ OPENSSL_LOG(ERR, "Failed to load Default provider");
OSSL_PROVIDER_unload(legacy);
return;
}
@@ -2123,7 +2123,7 @@ process_openssl_dsa_sign_op_evp(struct rte_crypto_op *cop,
dsa_sign_data_p = (const unsigned char *)dsa_sign_data;
DSA_SIG *sign = d2i_DSA_SIG(NULL, &dsa_sign_data_p, outlen);
if (!sign) {
- OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
OPENSSL_free(dsa_sign_data);
goto err_dsa_sign;
} else {
@@ -2168,7 +2168,7 @@ process_openssl_dsa_verify_op_evp(struct rte_crypto_op *cop,
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
if (!param_bld) {
- OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
return -1;
}
@@ -2246,7 +2246,7 @@ process_openssl_dsa_sign_op(struct rte_crypto_op *cop,
dsa);
if (sign == NULL) {
- OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
} else {
const BIGNUM *r = NULL, *s = NULL;
@@ -2275,7 +2275,7 @@ process_openssl_dsa_verify_op(struct rte_crypto_op *cop,
BIGNUM *pub_key = NULL;
if (sign == NULL) {
- OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
return -1;
}
@@ -2352,7 +2352,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (!OSSL_PARAM_BLD_push_BN(param_bld_peer, OSSL_PKEY_PARAM_PUB_KEY,
pub_key)) {
- OPENSSL_LOG(ERR, "Failed to set public key\n");
+ OPENSSL_LOG(ERR, "Failed to set public key");
OSSL_PARAM_BLD_free(param_bld_peer);
BN_free(pub_key);
return ret;
@@ -2397,7 +2397,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (!OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
priv_key)) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
EVP_PKEY_CTX_free(peer_ctx);
OSSL_PARAM_free(params_peer);
BN_free(pub_key);
@@ -2423,7 +2423,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
goto err_dh;
if (op->ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
- OPENSSL_LOG(DEBUG, "%s:%d updated pub key\n", __func__, __LINE__);
+ OPENSSL_LOG(DEBUG, "%s:%d updated pub key", __func__, __LINE__);
if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PUB_KEY, &pub_key))
goto err_dh;
/* output public key */
@@ -2432,7 +2432,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (op->ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {
- OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n", __func__, __LINE__);
+ OPENSSL_LOG(DEBUG, "%s:%d updated priv key", __func__, __LINE__);
if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PRIV_KEY, &priv_key))
goto err_dh;
@@ -2527,7 +2527,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
}
ret = set_dh_priv_key(dh_key, priv_key);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
BN_free(peer_key);
BN_free(priv_key);
@@ -2574,7 +2574,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
}
ret = set_dh_priv_key(dh_key, priv_key);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
BN_free(priv_key);
return 0;
@@ -2596,7 +2596,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
const BIGNUM *pub_key = NULL;
- OPENSSL_LOG(DEBUG, "%s:%d update public key\n",
+ OPENSSL_LOG(DEBUG, "%s:%d update public key",
__func__, __LINE__);
/* get the generated keys */
@@ -2610,7 +2610,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {
const BIGNUM *priv_key = NULL;
- OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n",
+ OPENSSL_LOG(DEBUG, "%s:%d updated priv key",
__func__, __LINE__);
/* get the generated keys */
@@ -2719,7 +2719,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
default:
cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
OPENSSL_LOG(ERR,
- "rsa pad type not supported %d\n", pad);
+ "rsa pad type not supported %d", pad);
return ret;
}
@@ -2746,7 +2746,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
op->rsa.cipher.length = outlen;
OPENSSL_LOG(DEBUG,
- "length of encrypted text %zu\n", outlen);
+ "length of encrypted text %zu", outlen);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -2770,7 +2770,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
goto err_rsa;
op->rsa.message.length = outlen;
- OPENSSL_LOG(DEBUG, "length of decrypted text %zu\n", outlen);
+ OPENSSL_LOG(DEBUG, "length of decrypted text %zu", outlen);
break;
case RTE_CRYPTO_ASYM_OP_SIGN:
@@ -2825,7 +2825,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
OPENSSL_LOG(DEBUG,
"Length of public_decrypt %zu "
- "length of message %zd\n",
+ "length of message %zd",
outlen, op->rsa.message.length);
if (CRYPTO_memcmp(tmp, op->rsa.message.data,
op->rsa.message.length)) {
@@ -3097,7 +3097,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
default:
cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
OPENSSL_LOG(ERR,
- "rsa pad type not supported %d\n", pad);
+ "rsa pad type not supported %d", pad);
return 0;
}
@@ -3112,7 +3112,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
if (ret > 0)
op->rsa.cipher.length = ret;
OPENSSL_LOG(DEBUG,
- "length of encrypted text %d\n", ret);
+ "length of encrypted text %d", ret);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -3150,7 +3150,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
OPENSSL_LOG(DEBUG,
"Length of public_decrypt %d "
- "length of message %zd\n",
+ "length of message %zd",
ret, op->rsa.message.length);
if ((ret <= 0) || (CRYPTO_memcmp(tmp, op->rsa.message.data,
op->rsa.message.length))) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 1bbb855a59..b7b612fc57 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -892,7 +892,7 @@ static int openssl_set_asym_session_parameters(
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_rsa;
}
@@ -900,7 +900,7 @@ static int openssl_set_asym_session_parameters(
|| !OSSL_PARAM_BLD_push_BN(param_bld,
OSSL_PKEY_PARAM_RSA_E, e)) {
OSSL_PARAM_BLD_free(param_bld);
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_rsa;
}
@@ -1033,14 +1033,14 @@ static int openssl_set_asym_session_parameters(
ret = set_rsa_params(rsa, p, q);
if (ret) {
OPENSSL_LOG(ERR,
- "failed to set rsa params\n");
+ "failed to set rsa params");
RSA_free(rsa);
goto err_rsa;
}
ret = set_rsa_crt_params(rsa, dmp1, dmq1, iqmp);
if (ret) {
OPENSSL_LOG(ERR,
- "failed to set crt params\n");
+ "failed to set crt params");
RSA_free(rsa);
/*
* set already populated params to NULL
@@ -1053,7 +1053,7 @@ static int openssl_set_asym_session_parameters(
ret = set_rsa_keys(rsa, n, e, d);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to load rsa keys\n");
+ OPENSSL_LOG(ERR, "Failed to load rsa keys");
RSA_free(rsa);
return ret;
}
@@ -1080,7 +1080,7 @@ err_rsa:
BN_CTX *ctx = BN_CTX_new();
if (ctx == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
return ret;
}
BN_CTX_start(ctx);
@@ -1111,7 +1111,7 @@ err_rsa:
BN_CTX *ctx = BN_CTX_new();
if (ctx == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
return ret;
}
BN_CTX_start(ctx);
@@ -1152,7 +1152,7 @@ err_rsa:
OSSL_PARAM_BLD *param_bld = NULL;
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dh;
}
if ((!OSSL_PARAM_BLD_push_utf8_string(param_bld,
@@ -1168,7 +1168,7 @@ err_rsa:
OSSL_PARAM_BLD *param_bld_peer = NULL;
param_bld_peer = OSSL_PARAM_BLD_new();
if (!param_bld_peer) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
OSSL_PARAM_BLD_free(param_bld);
goto err_dh;
}
@@ -1203,7 +1203,7 @@ err_rsa:
dh = DH_new();
if (dh == NULL) {
OPENSSL_LOG(ERR,
- "failed to allocate resources\n");
+ "failed to allocate resources");
goto err_dh;
}
ret = set_dh_params(dh, p, g);
@@ -1217,7 +1217,7 @@ err_rsa:
break;
err_dh:
- OPENSSL_LOG(ERR, " failed to set dh params\n");
+ OPENSSL_LOG(ERR, " failed to set dh params");
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
BN_free(*p);
BN_free(*g);
@@ -1263,7 +1263,7 @@ err_dh:
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dsa;
}
@@ -1273,7 +1273,7 @@ err_dh:
|| !OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
*priv_key)) {
OSSL_PARAM_BLD_free(param_bld);
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dsa;
}
asym_session->xfrm_type = RTE_CRYPTO_ASYM_XFORM_DSA;
@@ -1313,14 +1313,14 @@ err_dh:
DSA *dsa = DSA_new();
if (dsa == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
goto err_dsa;
}
ret = set_dsa_params(dsa, p, q, g);
if (ret) {
DSA_free(dsa);
- OPENSSL_LOG(ERR, "Failed to dsa params\n");
+ OPENSSL_LOG(ERR, "Failed to dsa params");
goto err_dsa;
}
@@ -1334,7 +1334,7 @@ err_dh:
ret = set_dsa_keys(dsa, pub_key, priv_key);
if (ret) {
DSA_free(dsa);
- OPENSSL_LOG(ERR, "Failed to set keys\n");
+ OPENSSL_LOG(ERR, "Failed to set keys");
goto err_dsa;
}
asym_session->u.s.dsa = dsa;
@@ -1369,21 +1369,21 @@ err_dsa:
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate params\n");
+ OPENSSL_LOG(ERR, "failed to allocate params");
goto err_sm2;
}
ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
OSSL_ASYM_CIPHER_PARAM_DIGEST, "SM3", 0);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
OSSL_PKEY_PARAM_GROUP_NAME, "SM2", 0);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
@@ -1393,7 +1393,7 @@ err_dsa:
ret = OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
pkey_bn);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
@@ -1408,13 +1408,13 @@ err_dsa:
ret = OSSL_PARAM_BLD_push_octet_string(param_bld,
OSSL_PKEY_PARAM_PUB_KEY, pubkey, len);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
params = OSSL_PARAM_BLD_to_param(param_bld);
if (!params) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 2bf3060278..5d240a3de1 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
qat_pci_dev->name, "asym");
- QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
+ QAT_LOG(DEBUG, "Creating QAT ASYM device %s", name);
if (gen_dev_ops->cryptodev_ops == NULL) {
QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..224cc0ab50 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
ret = -ENOTSUP;
goto error_out;
default:
- QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+ QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u",
cipher_xform->algo);
ret = -EINVAL;
goto error_out;
@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
aead_xform);
break;
default:
- QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u\n",
+ QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u",
aead_xform->algo);
return -EINVAL;
}
@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
err = imb_get_errno(m);
if (err)
- QAT_LOG(ERR, "Error: %s!\n", imb_get_strerror(err));
+ QAT_LOG(ERR, "Error: %s!", imb_get_strerror(err));
return -EFAULT;
}
@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
&state2_size, cdesc->aes_cmac);
#endif
if (ret) {
- cdesc->aes_cmac ? QAT_LOG(ERR,
- "(CMAC)precompute failed")
- : QAT_LOG(ERR,
- "(XCBC)precompute failed");
+ QAT_LOG(ERR, "(%s)precompute failed",
+ cdesc->aes_cmac ? "CMAC" : "XCBC");
return -EFAULT;
}
break;
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c
index 824383512e..e4b1a32398 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -634,7 +634,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
setup.sched_param = &params;
sess->handle_cipher = wd_cipher_alloc_sess(&setup);
if (!sess->handle_cipher) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
ret = -EINVAL;
goto env_uninit;
}
@@ -642,7 +642,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length);
if (ret) {
wd_cipher_free_sess(sess->handle_cipher);
- UADK_LOG(ERR, "uadk failed to set key!\n");
+ UADK_LOG(ERR, "uadk failed to set key!");
ret = -EINVAL;
goto env_uninit;
}
@@ -734,7 +734,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
setup.sched_param = &params;
sess->handle_digest = wd_digest_alloc_sess(&setup);
if (!sess->handle_digest) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
ret = -EINVAL;
goto env_uninit;
}
@@ -745,7 +745,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
xform->auth.key.data,
xform->auth.key.length);
if (ret) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
wd_digest_free_sess(sess->handle_digest);
sess->handle_digest = 0;
ret = -EINVAL;
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4854820ba6..c0d3178b71 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -591,7 +591,7 @@ virtio_crypto_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
qp_conf->nb_descriptors, socket_id, &vq);
if (ret < 0) {
VIRTIO_CRYPTO_INIT_LOG_ERR(
- "virtio crypto data queue initialization failed\n");
+ "virtio crypto data queue initialization failed");
return ret;
}
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 10e65ef1d7..3d4fd818f8 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -295,7 +295,7 @@ static struct fsl_qdma_queue
for (i = 0; i < queue_num; i++) {
if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
- DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
+ DPAA_QDMA_ERR("Get wrong queue-sizes.");
goto fail;
}
queue_temp = queue_head + i + (j * queue_num);
@@ -345,7 +345,7 @@ fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
status_size = QDMA_STATUS_SIZE;
if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
- DPAA_QDMA_ERR("Get wrong status_size.\n");
+ DPAA_QDMA_ERR("Get wrong status_size.");
return NULL;
}
@@ -643,7 +643,7 @@ fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
if (ret) {
DPAA_QDMA_ERR(
- "failed to alloc dma buffer for comp descriptor\n");
+ "failed to alloc dma buffer for comp descriptor");
goto exit;
}
@@ -779,7 +779,7 @@ dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
(dma_addr_t)dst, (dma_addr_t)src,
length, NULL, NULL);
if (!fsl_comp) {
- DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+ DPAA_QDMA_DP_DEBUG("fsl_comp is NULL");
return -1;
}
ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
@@ -803,19 +803,19 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
intr = qdma_readl_be(status + FSL_QDMA_DEDR);
if (intr) {
- DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+ DPAA_QDMA_ERR("DMA transaction error! %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECBR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
qdma_writel(0xffffffff,
status + FSL_QDMA_DEDR);
intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -849,19 +849,19 @@ dpaa_qdma_dequeue(void *dev_private,
intr = qdma_readl_be(status + FSL_QDMA_DEDR);
if (intr) {
- DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+ DPAA_QDMA_ERR("DMA transaction error! %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECBR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
qdma_writel(0xffffffff,
status + FSL_QDMA_DEDR);
intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -974,7 +974,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
close(ccsr_qdma_fd);
if (fsl_qdma->ctrl_base == MAP_FAILED) {
DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
- "size %d\n", phys_addr, regs_size);
+ "size %d", phys_addr, regs_size);
goto err;
}
@@ -998,7 +998,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
ret = fsl_qdma_reg_init(fsl_qdma);
if (ret) {
- DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
+ DPAA_QDMA_ERR("Can't Initialize the qDMA engine.");
munmap(fsl_qdma->ctrl_base, regs_size);
goto err;
}
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 2c91ceec13..5780e49297 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -578,7 +578,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -608,7 +608,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
DPAA2_QDMA_DP_WARN(
- "VDQ command not issued.QBMAN busy\n");
+ "VDQ command not issued.QBMAN busy");
/* Portal was busy, try again */
continue;
}
@@ -684,7 +684,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
DPAA2_QDMA_DP_WARN(
- "VDQ command is not issued. QBMAN is busy (2)\n");
+ "VDQ command is not issued. QBMAN is busy (2)");
continue;
}
break;
@@ -728,7 +728,7 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -825,7 +825,7 @@ dpdmai_dev_submit_multi(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c
index 4db3b0554c..8bc076f5d5 100644
--- a/drivers/dma/hisilicon/hisi_dmadev.c
+++ b/drivers/dma/hisilicon/hisi_dmadev.c
@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
struct hisi_dma_dev *hw = dev->data->dev_private;
if (hw->iomz == NULL) {
- HISI_DMA_ERR(hw, "Vchan was not setup, start fail!\n");
+ HISI_DMA_ERR(hw, "Vchan was not setup, start fail!");
return -EINVAL;
}
@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
* status array indexed by csq_head. Only error logs
* are used for prompting.
*/
- HISI_DMA_ERR(hw, "invalid csq_head:%u!\n", csq_head);
+ HISI_DMA_ERR(hw, "invalid csq_head:%u!", csq_head);
count = 0;
break;
}
@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
if (pci_dev->mem_resource[2].addr == NULL) {
- HISI_DMA_LOG(ERR, "%s BAR2 is NULL!\n", name);
+ HISI_DMA_LOG(ERR, "%s BAR2 is NULL!", name);
return -ENODEV;
}
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 83d53942eb..dc2e8cd432 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -616,7 +616,7 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
sizeof(idxd->batch_comp_ring[0])) * (idxd->max_batches + 1),
sizeof(idxd->batch_comp_ring[0]), dev->numa_node);
if (idxd->batch_comp_ring == NULL) {
- IDXD_PMD_ERR("Unable to reserve memory for batch data\n");
+ IDXD_PMD_ERR("Unable to reserve memory for batch data");
ret = -ENOMEM;
goto cleanup;
}
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index a78889a7ef..2ee78773bb 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
/* look up queue 0 to get the PCI structure */
snprintf(qname, sizeof(qname), "%s-q0", name);
- IDXD_PMD_INFO("Looking up %s\n", qname);
+ IDXD_PMD_INFO("Looking up %s", qname);
ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
if (ret != 0) {
IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
for (qid = 1; qid < max_qid; qid++) {
/* add the queue number to each device name */
snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
- IDXD_PMD_INFO("Looking up %s\n", qname);
+ IDXD_PMD_INFO("Looking up %s", qname);
ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
if (ret != 0) {
IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
return ret;
}
if (idxd.u.pci->portals == NULL) {
- IDXD_PMD_ERR("Error, invalid portal assigned during initialization\n");
+ IDXD_PMD_ERR("Error, invalid portal assigned during initialization");
free(idxd.u.pci);
return -EINVAL;
}
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 5fc14bcf22..8b7ff5652f 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -156,12 +156,12 @@ ioat_dev_start(struct rte_dma_dev *dev)
ioat->offset = 0;
ioat->failure = 0;
- IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]\n",
+ IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]",
chansts_readable[ioat->status & IOAT_CHANSTS_STATUS],
ioat->status);
if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) {
- IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n");
+ IOAT_PMD_WARN("Device HALTED on start, attempting to recover");
if (__ioat_recover(ioat) != 0) {
IOAT_PMD_ERR("Device couldn't be recovered");
return -1;
@@ -469,7 +469,7 @@ ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_
ioat->failure = ioat->regs->chanerr;
ioat->next_read = read + count + 1;
if (__ioat_recover(ioat) != 0) {
- IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+ IOAT_PMD_ERR("Device HALTED and could not be recovered");
__dev_dump(dev_private, stdout);
return 0;
}
@@ -515,7 +515,7 @@ ioat_completed_status(void *dev_private, uint16_t qid __rte_unused,
count++;
ioat->next_read = read + count;
if (__ioat_recover(ioat) != 0) {
- IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+ IOAT_PMD_ERR("Device HALTED and could not be recovered");
__dev_dump(dev_private, stdout);
return 0;
}
@@ -652,12 +652,12 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
/* Do device initialization - reset and set error behaviour. */
if (ioat->regs->chancnt != 1)
- IOAT_PMD_WARN("%s: Channel count == %d\n", __func__,
+ IOAT_PMD_WARN("%s: Channel count == %d", __func__,
ioat->regs->chancnt);
/* Locked by someone else. */
if (ioat->regs->chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE) {
- IOAT_PMD_WARN("%s: Channel appears locked\n", __func__);
+ IOAT_PMD_WARN("%s: Channel appears locked", __func__);
ioat->regs->chanctrl = 0;
}
@@ -676,7 +676,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
rte_delay_ms(1);
if (++retry >= 200) {
IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8
- ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32"\n",
+ ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32,
__func__,
ioat->regs->chancmd,
ioat->regs->chansts,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6d59fdf909..bba70646fa 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -268,7 +268,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
sso_set_priv_mem_fn(dev->event_dev, NULL);
plt_tim_dbg(
- "Total memory used %" PRIu64 "MB\n",
+ "Total memory used %" PRIu64 "MB",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
(tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
BIT_ULL(20)));
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 5044cb17ef..9dc5edb3fb 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
ret = dlb2_iface_get_num_resources(handle,
&dlb2->hw_rsrc_query_results);
if (ret) {
- DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d\n", ret);
+ DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d", ret);
return ret;
}
@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
const char **mask_str = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
if (*max_cq_depth < DLB2_MIN_CQ_DEPTH_OVERRIDE ||
*max_cq_depth > DLB2_MAX_CQ_DEPTH_OVERRIDE ||
!rte_is_power_of_2(*max_cq_depth)) {
- DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2\n",
+ DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2",
DLB2_MIN_CQ_DEPTH_OVERRIDE,
DLB2_MAX_CQ_DEPTH_OVERRIDE);
return -EINVAL;
@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
if (*max_enq_depth < DLB2_MIN_ENQ_DEPTH_OVERRIDE ||
*max_enq_depth > DLB2_MAX_ENQ_DEPTH_OVERRIDE ||
!rte_is_power_of_2(*max_enq_depth)) {
- DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2\n",
+ DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2",
DLB2_MIN_ENQ_DEPTH_OVERRIDE,
DLB2_MAX_ENQ_DEPTH_OVERRIDE);
return -EINVAL;
@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,
if (*max_num_events < 0 || *max_num_events >
DLB2_MAX_NUM_LDB_CREDITS) {
- DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d\n",
+ DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d",
DLB2_MAX_NUM_LDB_CREDITS);
return -EINVAL;
}
@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,
if (*num_dir_credits < 0 ||
*num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
- DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
+ DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d",
DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
return -EINVAL;
}
@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
int first, last, cos_id, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &cos_id) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val\n");
+ DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_LDB_PORTS) {
- DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value\n");
+ DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value");
return -EINVAL;
}
if (cos_id < DLB2_COS_0 || cos_id > DLB2_COS_3) {
- DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4\n");
+ DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4");
return -EINVAL;
}
@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
struct dlb2_cos_bw *cos_bw = opaque;
if (opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,
if (sscanf(value, "%d:%d:%d:%d", &cos_bw->val[0], &cos_bw->val[1],
&cos_bw->val[2], &cos_bw->val[3]) != 4) {
- DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100\n");
+ DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100");
return -EINVAL;
}
if (cos_bw->val[0] + cos_bw->val[1] + cos_bw->val[2] + cos_bw->val[3] > 100) {
- DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100\n");
+ DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100");
return -EINVAL;
}
@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
return ret;
if (*sw_credit_quanta <= 0) {
- DLB2_LOG_ERR("sw_credit_quanta must be > 0\n");
+ DLB2_LOG_ERR("sw_credit_quanta must be > 0");
return -EINVAL;
}
@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
bool *dlb2_vector_opts_enabled = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
bool *default_ldb_port_allocation = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
bool *enable_cq_weight = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
int first, last, thresh, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
return -EINVAL;
}
if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+ DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
return -EINVAL;
}
@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
int first, last, thresh, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
return -EINVAL;
}
if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+ DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
return -EINVAL;
}
@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
if (ret) {
const struct rte_eventdev_data *data = dev->data;
- DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+ DLB2_LOG_ERR("get resources err=%d, devid=%d",
ret, data->dev_id);
/* fn is void, so fall through and return values set up in
* probe
@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
struct dlb2_create_sched_domain_args *cfg;
if (resources_asked == NULL) {
- DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter\n");
+ DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter");
ret = EINVAL;
goto error_exit;
}
@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
if (cos_ports > resources_asked->num_ldb_ports ||
(cos_ports && dlb2->max_cos_port >= resources_asked->num_ldb_ports)) {
- DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports\n");
+ DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports");
ret = EINVAL;
goto error_exit;
}
@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_sched_domain_create(handle, cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s\n",
+ DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s",
ret,
dlb2_error_strings[cfg->response.status]);
@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
dlb2_hw_reset_sched_domain(dev, true);
ret = dlb2_hw_query_resources(dlb2);
if (ret) {
- DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+ DLB2_LOG_ERR("get resources err=%d, devid=%d",
ret, data->dev_id);
return ret;
}
}
if (config->nb_event_queues > rsrcs->num_queues) {
- DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_event_queues,
rsrcs->num_queues);
return -EINVAL;
}
if (config->nb_event_ports > (rsrcs->num_ldb_ports
+ rsrcs->num_dir_ports)) {
- DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_event_ports,
(rsrcs->num_ldb_ports + rsrcs->num_dir_ports));
return -EINVAL;
}
if (config->nb_events_limit > rsrcs->nb_events_limit) {
- DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_events_limit,
rsrcs->nb_events_limit);
return -EINVAL;
@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
if (dlb2_hw_create_sched_domain(dlb2, handle, rsrcs,
dlb2->version) < 0) {
- DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
+ DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed");
return -ENODEV;
}
@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
ret = dlb2_iface_get_sn_allocation(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
ret = dlb2_iface_set_sn_allocation(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
ret = dlb2_iface_get_sn_occupancy(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
}
if (i == DLB2_NUM_SN_GROUPS) {
- DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots\n",
+ DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots",
__func__, sequence_numbers);
return;
}
@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_ldb_queue_create(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return -EINVAL;
}
@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
qm_qid = dlb2_hw_create_ldb_queue(dlb2, ev_queue, queue_conf);
if (qm_qid < 0) {
- DLB2_LOG_ERR("Failed to create the load-balanced queue\n");
+ DLB2_LOG_ERR("Failed to create the load-balanced queue");
return qm_qid;
}
@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
RTE_CACHE_LINE_SIZE);
if (qe == NULL) {
- DLB2_LOG_ERR("dlb2: no memory for consume_qe\n");
+ DLB2_LOG_ERR("dlb2: no memory for consume_qe");
return -ENOMEM;
}
qm_port->consume_qe = qe;
@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
RTE_CACHE_LINE_SIZE);
if (qe == NULL) {
- DLB2_LOG_ERR("dlb2: no memory for complete_qe\n");
+ DLB2_LOG_ERR("dlb2: no memory for complete_qe");
return -ENOMEM;
}
qm_port->int_arm_qe = qe;
@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
qm_port->qe4 = rte_zmalloc(mz_name, sz, RTE_CACHE_LINE_SIZE);
if (qm_port->qe4 == NULL) {
- DLB2_LOG_ERR("dlb2: no qe4 memory\n");
+ DLB2_LOG_ERR("dlb2: no qe4 memory");
ret = -ENOMEM;
goto error_exit;
}
ret = dlb2_init_int_arm_qe(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d", ret);
goto error_exit;
}
ret = dlb2_init_consume_qe(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d", ret);
goto error_exit;
}
@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
return -EINVAL;
if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d\n",
+ DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d",
DLB2_MIN_CQ_DEPTH);
return -EINVAL;
}
if (dlb2->version == DLB2_HW_V2 && ev_port->cq_weight != 0 &&
ev_port->cq_weight > dequeue_depth) {
- DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d\n",
+ DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d",
dequeue_depth, ev_port->cq_weight);
return -EINVAL;
}
@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_ldb_port_create(handle, &cfg, dlb2->poll_mode);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
goto error_exit;
}
@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_init_qe_mem(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
goto error_exit;
}
@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_enable_cq_weight(handle, &cq_weight_args);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
ret,
dlb2_error_strings[cfg.response. status]);
goto error_exit;
@@ -1714,7 +1714,7 @@ error_exit:
rte_spinlock_unlock(&handle->resource_lock);
- DLB2_LOG_ERR("dlb2: create ldb port failed!\n");
+ DLB2_LOG_ERR("dlb2: create ldb port failed!");
return ret;
}
@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
return -EINVAL;
if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d\n",
+ DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d",
DLB2_MIN_CQ_DEPTH, DLB2_MAX_INPUT_QUEUE_DEPTH);
return -EINVAL;
}
if (enqueue_depth < DLB2_MIN_ENQUEUE_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d\n",
+ DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d",
DLB2_MIN_ENQUEUE_DEPTH);
return -EINVAL;
}
@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_dir_port_create(handle, &cfg, dlb2->poll_mode);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
goto error_exit;
}
@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
ret = dlb2_init_qe_mem(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
goto error_exit;
}
@@ -1913,7 +1913,7 @@ error_exit:
rte_spinlock_unlock(&handle->resource_lock);
- DLB2_LOG_ERR("dlb2: create dir port failed!\n");
+ DLB2_LOG_ERR("dlb2: create dir port failed!");
return ret;
}
@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
int ret;
if (dev == NULL || port_conf == NULL) {
- DLB2_LOG_ERR("Null parameter\n");
+ DLB2_LOG_ERR("Null parameter");
return -EINVAL;
}
@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
ev_port = &dlb2->ev_ports[ev_port_id];
/* configured? */
if (ev_port->setup_done) {
- DLB2_LOG_ERR("evport %d is already configured\n", ev_port_id);
+ DLB2_LOG_ERR("evport %d is already configured", ev_port_id);
return -EINVAL;
}
@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
if (port_conf->enqueue_depth > sw_credit_quanta ||
port_conf->enqueue_depth > hw_credit_quanta) {
- DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d\n",
+ DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d",
port_conf->enqueue_depth,
sw_credit_quanta,
hw_credit_quanta);
@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
port_conf->dequeue_depth,
port_conf->enqueue_depth);
if (ret < 0) {
- DLB2_LOG_ERR("Failed to create the lB port ve portId=%d\n",
+ DLB2_LOG_ERR("Failed to create the lB port ve portId=%d",
ev_port_id);
return ret;
@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
port_conf->dequeue_depth,
port_conf->enqueue_depth);
if (ret < 0) {
- DLB2_LOG_ERR("Failed to create the DIR port\n");
+ DLB2_LOG_ERR("Failed to create the DIR port");
return ret;
}
}
@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
ret = dlb2_iface_map_qid(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
- DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d\n",
+ DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d",
handle->domain_id, cfg.port_id,
cfg.qid,
cfg.priority);
@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
first_avail = i;
}
if (first_avail == -1) {
- DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.\n",
+ DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.",
ev_port->qm_port.id);
return -EINVAL;
}
@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_dir_queue_create(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return -EINVAL;
}
@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
qm_qid = dlb2_hw_create_dir_queue(dlb2, ev_queue, ev_port->qm_port.id);
if (qm_qid < 0) {
- DLB2_LOG_ERR("Failed to create the DIR queue\n");
+ DLB2_LOG_ERR("Failed to create the DIR queue");
return qm_qid;
}
@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
err = dlb2_event_queue_join_ldb(dlb2, ev_port, ev_queue, prio);
if (err) {
- DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d\n",
+ DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d",
ev_queue->qm_queue.is_directed ? "DIR" : "LDB",
ev_queue->id, ev_port->id);
@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
queue_is_dir = ev_queue->qm_queue.is_directed;
if (port_is_dir != queue_is_dir) {
- DLB2_LOG_ERR("%s queue %u can't link to %s port %u\n",
+ DLB2_LOG_ERR("%s queue %u can't link to %s port %u",
queue_is_dir ? "DIR" : "LDB", ev_queue->id,
port_is_dir ? "DIR" : "LDB", ev_port->id);
@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if there is space for the requested link */
if (!link_exists && index == -1) {
- DLB2_LOG_ERR("no space for new link\n");
+ DLB2_LOG_ERR("no space for new link");
rte_errno = -ENOSPC;
return -1;
}
@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if the directed port is already linked */
if (ev_port->qm_port.is_directed && ev_port->num_links > 0 &&
!link_exists) {
- DLB2_LOG_ERR("Can't link DIR port %d to >1 queues\n",
+ DLB2_LOG_ERR("Can't link DIR port %d to >1 queues",
ev_port->id);
rte_errno = -EINVAL;
return -1;
@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if the directed queue is already linked */
if (ev_queue->qm_queue.is_directed && ev_queue->num_links > 0 &&
!link_exists) {
- DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports\n",
+ DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports",
ev_queue->id);
rte_errno = -EINVAL;
return -1;
@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
RTE_SET_USED(dev);
if (ev_port == NULL) {
- DLB2_LOG_ERR("dlb2: evport not setup\n");
+ DLB2_LOG_ERR("dlb2: evport not setup");
rte_errno = -EINVAL;
return 0;
}
if (!ev_port->setup_done &&
ev_port->qm_port.config_state != DLB2_PREV_CONFIGURED) {
- DLB2_LOG_ERR("dlb2: evport not setup\n");
+ DLB2_LOG_ERR("dlb2: evport not setup");
rte_errno = -EINVAL;
return 0;
}
@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
ret = dlb2_iface_unmap_qid(handle, &cfg);
if (ret < 0)
- DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
RTE_SET_USED(dev);
if (!ev_port->setup_done) {
- DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+ DLB2_LOG_ERR("dlb2: evport %d is not configured",
ev_port->id);
rte_errno = -EINVAL;
return 0;
@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
int ret, j;
if (queues[i] >= dlb2->num_queues) {
- DLB2_LOG_ERR("dlb2: invalid queue id %d\n", queues[i]);
+ DLB2_LOG_ERR("dlb2: invalid queue id %d", queues[i]);
rte_errno = -EINVAL;
return i; /* return index of offending queue */
}
@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
ret = dlb2_event_queue_detach_ldb(dlb2, ev_port, ev_queue);
if (ret) {
- DLB2_LOG_ERR("unlink err=%d for port %d queue %d\n",
+ DLB2_LOG_ERR("unlink err=%d for port %d queue %d",
ret, ev_port->id, queues[i]);
rte_errno = -ENOENT;
return i; /* return index of offending queue */
@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
RTE_SET_USED(dev);
if (!ev_port->setup_done) {
- DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+ DLB2_LOG_ERR("dlb2: evport %d is not configured",
ev_port->id);
rte_errno = -EINVAL;
return 0;
@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
ret = dlb2_iface_pending_port_unmaps(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
rte_spinlock_lock(&dlb2->qm_instance.resource_lock);
if (dlb2->run_state != DLB2_RUN_STATE_STOPPED) {
- DLB2_LOG_ERR("bad state %d for dev_start\n",
+ DLB2_LOG_ERR("bad state %d for dev_start",
(int)dlb2->run_state);
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return -EINVAL;
@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
ret = dlb2_iface_sched_domain_start(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
case RTE_SCHED_TYPE_ORDERED:
DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
if (qm_queue->sched_type != RTE_SCHED_TYPE_ORDERED) {
- DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d\n",
+ DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d",
*queue_id);
rte_errno = -EINVAL;
return 1;
@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
*sched_type = DLB2_SCHED_UNORDERED;
break;
default:
- DLB2_LOG_ERR("Unsupported LDB sched type in put_qe\n");
+ DLB2_LOG_ERR("Unsupported LDB sched type in put_qe");
DLB2_INC_STAT(ev_port->stats.tx_invalid, 1);
rte_errno = -EINVAL;
return 1;
@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
int i;
if (port_id > dlb2->num_ports) {
- DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release\n",
+ DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release",
port_id);
rte_errno = -EINVAL;
return;
@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
sw_credit_update:
/* each release returns one credit */
if (unlikely(!ev_port->outstanding_releases)) {
- DLB2_LOG_ERR("%s: Outstanding releases underflowed.\n",
+ DLB2_LOG_ERR("%s: Outstanding releases underflowed.",
__func__);
return;
}
@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
* buffer is a mbuf.
*/
if (unlikely(qe->error)) {
- DLB2_LOG_ERR("QE error bit ON\n");
+ DLB2_LOG_ERR("QE error bit ON");
DLB2_INC_STAT(ev_port->stats.traffic.rx_drop, 1);
dlb2_consume_qe_immediate(qm_port, 1);
continue; /* Ignore */
@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_get_ldb_queue_depth(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_get_dir_queue_depth(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
}
if (i == dlb2->num_ports) {
- DLB2_LOG_ERR("internal error: no LDB ev_ports\n");
+ DLB2_LOG_ERR("internal error: no LDB ev_ports");
return;
}
@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
rte_event_port_unlink(dev_id, ev_port->id, NULL, 0);
if (rte_errno) {
- DLB2_LOG_ERR("internal error: failed to unlink ev_port %d\n",
+ DLB2_LOG_ERR("internal error: failed to unlink ev_port %d",
ev_port->id);
return;
}
@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
/* Link the ev_port to the queue */
ret = rte_event_port_link(dev_id, ev_port->id, &qid, &prio, 1);
if (ret != 1) {
- DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d\n",
+ DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d",
ev_port->id, qid);
return;
}
@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
/* Unlink the ev_port from the queue */
ret = rte_event_port_unlink(dev_id, ev_port->id, &qid, 1);
if (ret != 1) {
- DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d\n",
+ DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d",
ev_port->id, qid);
return;
}
@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return;
} else if (dlb2->run_state != DLB2_RUN_STATE_STARTED) {
- DLB2_LOG_ERR("Internal error: bad state %d for dev_stop\n",
+ DLB2_LOG_ERR("Internal error: bad state %d for dev_stop",
(int)dlb2->run_state);
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return;
@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_open(&dlb2->qm_instance, name);
if (err < 0) {
- DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+ DLB2_LOG_ERR("could not open event hardware device, err=%d",
err);
return err;
}
@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_get_device_version(&dlb2->qm_instance,
&dlb2->revision);
if (err < 0) {
- DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d",
err);
return err;
}
err = dlb2_hw_query_resources(dlb2);
if (err) {
- DLB2_LOG_ERR("get resources err=%d for %s\n",
+ DLB2_LOG_ERR("get resources err=%d for %s",
err, name);
return err;
}
@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
break;
}
if (ret) {
- DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d",
err);
return err;
}
@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_get_cq_poll_mode(&dlb2->qm_instance, &dlb2->poll_mode);
if (err < 0) {
- DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d",
err);
return err;
}
@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
/* Complete xtstats runtime initialization */
err = dlb2_xstats_init(dlb2);
if (err) {
- DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d\n", err);
+ DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d", err);
return err;
}
@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_open(&dlb2->qm_instance, name);
if (err < 0) {
- DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+ DLB2_LOG_ERR("could not open event hardware device, err=%d",
err);
return err;
}
err = dlb2_hw_query_resources(dlb2);
if (err) {
- DLB2_LOG_ERR("get resources err=%d for %s\n",
+ DLB2_LOG_ERR("get resources err=%d for %s",
err, name);
return err;
}
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index ff15271dda..28de48e24e 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -766,7 +766,7 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
goto invalid_value;
}
@@ -827,7 +827,7 @@ dlb2_eventdev_xstats_get_by_name(const struct rte_eventdev *dev,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n",
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d",
xs->fn_id);
return (uint64_t)-1;
}
@@ -865,7 +865,7 @@ dlb2_xstats_reset_range(struct dlb2_eventdev *dlb2, uint32_t start,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
return;
}
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a95d3227a4..89eabc2a93 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -72,7 +72,7 @@ static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
};
if (retries == DLB2_READY_RETRY_LIMIT) {
- DLB2_LOG_ERR("[%s()] wait for device ready timed out\n",
+ DLB2_LOG_ERR("[%s()] wait for device ready timed out",
__func__);
return -1;
}
@@ -214,7 +214,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
pcie_cap_offset = rte_pci_find_capability(pdev, RTE_PCI_CAP_ID_EXP);
if (pcie_cap_offset < 0) {
- DLB2_LOG_ERR("[%s()] failed to find the pcie capability\n",
+ DLB2_LOG_ERR("[%s()] failed to find the pcie capability",
__func__);
return pcie_cap_offset;
}
@@ -261,7 +261,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = RTE_PCI_COMMAND;
cmd = 0;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pci command",
__func__);
return ret;
}
@@ -273,7 +273,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVSTA;
ret = rte_pci_read_config(pdev, &devsta_busy_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to read the pci device status\n",
+ DLB2_LOG_ERR("[%s()] failed to read the pci device status",
__func__);
return ret;
}
@@ -286,7 +286,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
}
if (wait_count == 4) {
- DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out\n",
+ DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out",
__func__);
return -1;
}
@@ -294,7 +294,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
ret = rte_pci_read_config(pdev, &devctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to read the pcie device control\n",
+ DLB2_LOG_ERR("[%s()] failed to read the pcie device control",
__func__);
return ret;
}
@@ -303,7 +303,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &devctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie device control\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie device control",
__func__);
return ret;
}
@@ -316,7 +316,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
ret = rte_pci_write_config(pdev, &dev_ctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d",
__func__, (int)off);
return ret;
}
@@ -324,7 +324,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL;
ret = rte_pci_write_config(pdev, &lnk_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -332,7 +332,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL;
ret = rte_pci_write_config(pdev, &slt_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -340,7 +340,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_RTCTL;
ret = rte_pci_write_config(pdev, &rt_ctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -348,7 +348,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL2;
ret = rte_pci_write_config(pdev, &dev_ctl2_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -356,7 +356,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL2;
ret = rte_pci_write_config(pdev, &lnk_word2, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -364,7 +364,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL2;
ret = rte_pci_write_config(pdev, &slt_word2, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -376,7 +376,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pri_cap_offset + RTE_PCI_PRI_ALLOC_REQ;
ret = rte_pci_write_config(pdev, &pri_reqs_dword, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -384,7 +384,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pri_cap_offset + RTE_PCI_PRI_CTRL;
ret = rte_pci_write_config(pdev, &pri_ctrl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -402,7 +402,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -413,7 +413,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -424,7 +424,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -434,7 +434,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = (i - 1) * 4;
ret = rte_pci_write_config(pdev, &dword[i - 1], 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -444,7 +444,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
cmd &= ~RTE_PCI_COMMAND_INTX_DISABLE;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pci command",
__func__);
return ret;
}
@@ -457,7 +457,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
cmd |= RTE_PCI_MSIX_FLAGS_ENABLE;
cmd |= RTE_PCI_MSIX_FLAGS_MASKALL;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+ DLB2_LOG_ERR("[%s()] failed to write msix flags",
__func__);
return ret;
}
@@ -467,7 +467,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
cmd &= ~RTE_PCI_MSIX_FLAGS_MASKALL;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+ DLB2_LOG_ERR("[%s()] failed to write msix flags",
__func__);
return ret;
}
@@ -493,7 +493,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -509,7 +509,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = acs_cap_offset + RTE_PCI_ACS_CTRL;
ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -520,7 +520,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
*/
off = DLB2_PCI_PASID_CAP_OFFSET;
if (rte_pci_pasid_set_state(pdev, off, false) < 0) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return -1;
}
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 3d15250e11..019e90f7e7 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -336,7 +336,7 @@ dlb2_pf_ldb_port_create(struct dlb2_hw_dev *handle,
/* Lock the page in memory */
ret = rte_mem_lock_page(port_base);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+ DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
goto create_port_err;
}
@@ -411,7 +411,7 @@ dlb2_pf_dir_port_create(struct dlb2_hw_dev *handle,
/* Lock the page in memory */
ret = rte_mem_lock_page(port_base);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+ DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
goto create_port_err;
}
@@ -737,7 +737,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
&dlb2_args,
dlb2->version);
if (ret) {
- DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
+ DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d",
ret, rte_errno);
goto dlb2_probe_failed;
}
@@ -748,7 +748,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev, probe_args);
if (dlb2->qm_instance.pf_dev == NULL) {
- DLB2_LOG_ERR("DLB2 PF Probe failed with error %d\n",
+ DLB2_LOG_ERR("DLB2 PF Probe failed with error %d",
rte_errno);
ret = -rte_errno;
goto dlb2_probe_failed;
@@ -766,13 +766,13 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
if (ret)
goto dlb2_probe_failed;
- DLB2_LOG_INFO("DLB2 PF Probe success\n");
+ DLB2_LOG_INFO("DLB2 PF Probe success");
return 0;
dlb2_probe_failed:
- DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d\n", ret);
+ DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d", ret);
return ret;
}
@@ -811,7 +811,7 @@ event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
event_dlb2_pf_name);
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -826,7 +826,7 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -845,7 +845,7 @@ event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
event_dlb2_pf_name);
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -860,7 +860,7 @@ event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index dd4e64395f..4658eaf3a2 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
ret = dpaa2_affine_qbman_swp();
if (ret < 0) {
DPAA2_EVENTDEV_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -276,7 +276,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
ret = dpaa2_affine_qbman_swp();
if (ret < 0) {
DPAA2_EVENTDEV_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
for (i = 0; i < cryptodev->data->nb_queue_pairs; i++) {
ret = dpaa2_sec_eventq_attach(cryptodev, i, dpcon, ev);
if (ret) {
- DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d\n",
+ DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d",
ret);
goto fail;
}
@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
dpcon, &conf->ev);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_attach failed: ret: %d\n", ret);
+ "dpaa2_sec_eventq_attach failed: ret: %d", ret);
return ret;
}
return 0;
@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
ret = dpaa2_sec_eventq_detach(cdev, i);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_detach failed:ret %d\n", ret);
+ "dpaa2_sec_eventq_detach failed:ret %d", ret);
return ret;
}
}
@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
ret = dpaa2_sec_eventq_detach(cryptodev, rx_queue_id);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_detach failed: ret: %d\n", ret);
+ "dpaa2_sec_eventq_detach failed: ret: %d", ret);
return ret;
}
@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)
eventdev = rte_event_pmd_get_named_dev(name);
if (eventdev == NULL) {
- RTE_EDEV_LOG_ERR("eventdev with name %s not allocated", name);
+ DPAA2_EVENTDEV_ERR("eventdev with name %s not allocated", name);
return -1;
}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 090b3ed183..82f17144a6 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -196,7 +196,7 @@ timvf_ring_start(const struct rte_event_timer_adapter *adptr)
timr->tck_int = NSEC2CLK(timr->tck_nsec, rte_get_timer_hz());
timr->fast_div = rte_reciprocal_value_u64(timr->tck_int);
timvf_log_info("nb_bkts %d min_ns %"PRIu64" min_cyc %"PRIu64""
- " maxtmo %"PRIu64"\n",
+ " maxtmo %"PRIu64,
timr->nb_bkts, timr->tck_nsec, interval,
timr->max_tout);
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 0cccaf7e97..fe0c0ede6f 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -99,7 +99,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (unlikely(dev->data->dev_started)) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to link queue (%u) to port %d while device started\n",
+ "Attempt to link queue (%u) to port %d while device started",
dev->data->dev_id,
queues[0],
p->id);
@@ -110,7 +110,7 @@ opdl_port_link(struct rte_eventdev *dev,
/* Max of 1 queue per port */
if (num > 1) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to link more than one queue (%u) to port %d requested\n",
+ "Attempt to link more than one queue (%u) to port %d requested",
dev->data->dev_id,
num,
p->id);
@@ -120,7 +120,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (!p->configured) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "port %d not configured, cannot link to %u\n",
+ "port %d not configured, cannot link to %u",
dev->data->dev_id,
p->id,
queues[0]);
@@ -130,7 +130,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (p->external_qid != OPDL_INVALID_QID) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "port %d already linked to queue %u, cannot link to %u\n",
+ "port %d already linked to queue %u, cannot link to %u",
dev->data->dev_id,
p->id,
p->external_qid,
@@ -157,7 +157,7 @@ opdl_port_unlink(struct rte_eventdev *dev,
if (unlikely(dev->data->dev_started)) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to unlink queue (%u) to port %d while device started\n",
+ "Attempt to unlink queue (%u) to port %d while device started",
dev->data->dev_id,
queues[0],
p->id);
@@ -188,7 +188,7 @@ opdl_port_setup(struct rte_eventdev *dev,
/* Check if port already configured */
if (p->configured) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to setup port %d which is already setup\n",
+ "Attempt to setup port %d which is already setup",
dev->data->dev_id,
p->id);
return -EDQUOT;
@@ -244,7 +244,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
/* Extra sanity check, probably not needed */
if (queue_id == OPDL_INVALID_QID) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Invalid queue id %u requested\n",
+ "Invalid queue id %u requested",
dev->data->dev_id,
queue_id);
return -EINVAL;
@@ -252,7 +252,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
if (device->nb_q_md > device->max_queue_nb) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Max number of queues %u exceeded by request %u\n",
+ "Max number of queues %u exceeded by request %u",
dev->data->dev_id,
device->max_queue_nb,
device->nb_q_md);
@@ -262,7 +262,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
& conf->event_queue_cfg) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "QUEUE_CFG_ALL_TYPES not supported\n",
+ "QUEUE_CFG_ALL_TYPES not supported",
dev->data->dev_id);
return -ENOTSUP;
} else if (RTE_EVENT_QUEUE_CFG_SINGLE_LINK
@@ -281,7 +281,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
break;
default:
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Unknown queue type %d requested\n",
+ "Unknown queue type %d requested",
dev->data->dev_id,
conf->event_queue_cfg);
return -EINVAL;
@@ -292,7 +292,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
for (i = 0; i < device->nb_q_md; i++) {
if (device->q_md[i].ext_id == queue_id) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "queue id %u already setup\n",
+ "queue id %u already setup",
dev->data->dev_id,
queue_id);
return -EINVAL;
@@ -352,7 +352,7 @@ opdl_dev_configure(const struct rte_eventdev *dev)
if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "DEQUEUE_TIMEOUT not supported\n",
+ "DEQUEUE_TIMEOUT not supported",
dev->data->dev_id);
return -ENOTSUP;
}
@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)
if (!kvlist) {
PMD_DRV_LOG(INFO,
- "Ignoring unsupported parameters when creating device '%s'\n",
+ "Ignoring unsupported parameters when creating device '%s'",
name);
} else {
int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)
PMD_DRV_LOG(INFO, "DEV_ID:[%02d] : "
"Success - creating eventdev device %s, numa_node:[%d], do_validation:[%s]"
- " , self_test:[%s]\n",
+ " , self_test:[%s]",
dev->data->dev_id,
name,
socket_id,
@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
if (name == NULL)
return -EINVAL;
- PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s\n", name);
+ PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s", name);
return rte_event_pmd_vdev_uninit(name);
}
diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c
index b69c4769dc..9b0c4db5ce 100644
--- a/drivers/event/opdl/opdl_test.c
+++ b/drivers/event/opdl/opdl_test.c
@@ -101,7 +101,7 @@ init(struct test *t, int nb_queues, int nb_ports)
ret = rte_event_dev_configure(evdev, &config);
if (ret < 0)
- PMD_DRV_LOG(ERR, "%d: Error configuring device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error configuring device", __LINE__);
return ret;
};
@@ -119,7 +119,7 @@ create_ports(struct test *t, int num_ports)
for (i = 0; i < num_ports; i++) {
if (rte_event_port_setup(evdev, i, &conf) < 0) {
- PMD_DRV_LOG(ERR, "Error setting up port %d\n", i);
+ PMD_DRV_LOG(ERR, "Error setting up port %d", i);
return -1;
}
t->port[i] = i;
@@ -158,7 +158,7 @@ create_queues_type(struct test *t, int num_qids, enum queue_type flags)
for (i = t->nb_qids ; i < t->nb_qids + num_qids; i++) {
if (rte_event_queue_setup(evdev, i, &conf) < 0) {
- PMD_DRV_LOG(ERR, "%d: error creating qid %d\n ",
+ PMD_DRV_LOG(ERR, "%d: error creating qid %d ",
__LINE__, i);
return -1;
}
@@ -180,7 +180,7 @@ cleanup(struct test *t __rte_unused)
{
rte_event_dev_stop(evdev);
rte_event_dev_close(evdev);
- PMD_DRV_LOG(ERR, "clean up for test done\n");
+ PMD_DRV_LOG(ERR, "clean up for test done");
return 0;
};
@@ -202,7 +202,7 @@ ordered_basic(struct test *t)
if (init(t, 2, tx_port+1) < 0 ||
create_ports(t, tx_port+1) < 0 ||
create_queues_type(t, 2, OPDL_Q_TYPE_ORDERED)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -226,7 +226,7 @@ ordered_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
__LINE__);
cleanup(t);
return -1;
@@ -236,13 +236,13 @@ ordered_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping TX qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping TX qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
return -1;
}
/* Enqueue 3 packets to the rx port */
@@ -250,7 +250,7 @@ ordered_basic(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -262,7 +262,7 @@ ordered_basic(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__, i, err);
return -1;
}
@@ -278,7 +278,7 @@ ordered_basic(struct test *t)
deq_pkts = rte_event_dequeue_burst(evdev, t->port[i],
&deq_ev[i], 1, 0);
if (deq_pkts != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to deq\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Failed to deq", __LINE__);
rte_event_dev_dump(evdev, stdout);
return -1;
}
@@ -286,7 +286,7 @@ ordered_basic(struct test *t)
if (seq != (i-1)) {
PMD_DRV_LOG(ERR, " seq test failed ! eq is %d , "
- "port number is %u\n", seq, i);
+ "port number is %u", seq, i);
return -1;
}
}
@@ -298,7 +298,7 @@ ordered_basic(struct test *t)
deq_ev[i].queue_id = t->qid[1];
err = rte_event_enqueue_burst(evdev, t->port[i], &deq_ev[i], 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue", __LINE__);
return -1;
}
}
@@ -309,7 +309,7 @@ ordered_basic(struct test *t)
/* Check to see if we've got all 3 packets */
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
rte_event_dev_dump(evdev, stdout);
return 1;
@@ -339,7 +339,7 @@ atomic_basic(struct test *t)
if (init(t, 2, tx_port+1) < 0 ||
create_ports(t, tx_port+1) < 0 ||
create_queues_type(t, 2, OPDL_Q_TYPE_ATOMIC)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -364,7 +364,7 @@ atomic_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
__LINE__);
cleanup(t);
return -1;
@@ -374,13 +374,13 @@ atomic_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping TX qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping TX qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
return -1;
}
@@ -389,7 +389,7 @@ atomic_basic(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -402,7 +402,7 @@ atomic_basic(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__, i, err);
return -1;
}
@@ -419,7 +419,7 @@ atomic_basic(struct test *t)
if (t->port[i] != 2) {
if (deq_pkts != 0) {
- PMD_DRV_LOG(ERR, "%d: deq none zero !\n",
+ PMD_DRV_LOG(ERR, "%d: deq none zero !",
__LINE__);
rte_event_dev_dump(evdev, stdout);
return -1;
@@ -427,7 +427,7 @@ atomic_basic(struct test *t)
} else {
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !\n",
+ PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !",
__LINE__, deq_pkts);
rte_event_dev_dump(evdev, stdout);
return -1;
@@ -444,7 +444,7 @@ atomic_basic(struct test *t)
if (err != 3) {
PMD_DRV_LOG(ERR, "port %d: Failed to enqueue pkt %u, "
- "retval = %u\n",
+ "retval = %u",
t->port[i], 3, err);
return -1;
}
@@ -460,7 +460,7 @@ atomic_basic(struct test *t)
/* Check to see if we've got all 3 packets */
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
rte_event_dev_dump(evdev, stdout);
return 1;
@@ -568,7 +568,7 @@ single_link_w_stats(struct test *t)
create_ports(t, 3) < 0 || /* 0,1,2 */
create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -587,7 +587,7 @@ single_link_w_stats(struct test *t)
err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+ PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
__LINE__,
t->port[1],
t->qid[0]);
@@ -598,7 +598,7 @@ single_link_w_stats(struct test *t)
err = rte_event_port_link(evdev, t->port[2], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+ PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
__LINE__,
t->port[2],
t->qid[1]);
@@ -607,7 +607,7 @@ single_link_w_stats(struct test *t)
}
if (rte_event_dev_start(evdev) != 0) {
- PMD_DRV_LOG(ERR, "%d: failed to start device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: failed to start device", __LINE__);
cleanup(t);
return -1;
}
@@ -619,7 +619,7 @@ single_link_w_stats(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -631,7 +631,7 @@ single_link_w_stats(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__,
t->port[rx_port],
err);
@@ -647,7 +647,7 @@ single_link_w_stats(struct test *t)
deq_ev, 3, 0);
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: deq not 3 !\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: deq not 3 !", __LINE__);
cleanup(t);
return -1;
}
@@ -662,7 +662,7 @@ single_link_w_stats(struct test *t)
NEW_NUM_PACKETS);
if (deq_pkts != 2) {
- PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!\n", __LINE__, deq_pkts);
+ PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!", __LINE__, deq_pkts);
cleanup(t);
return -1;
}
@@ -676,7 +676,7 @@ single_link_w_stats(struct test *t)
/* Check to see if we've got all 2 packets */
if (deq_pkts != 2) {
- PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
cleanup(t);
return -1;
@@ -706,7 +706,7 @@ single_link(struct test *t)
create_ports(t, 3) < 0 || /* 0,1,2 */
create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -725,7 +725,7 @@ single_link(struct test *t)
err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
cleanup(t);
return -1;
}
@@ -733,14 +733,14 @@ single_link(struct test *t)
err = rte_event_port_link(evdev, t->port[2], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) == 0) {
PMD_DRV_LOG(ERR, "%d: start DIDN'T FAIL with more than 1 "
- "SINGLE_LINK PORT\n", __LINE__);
+ "SINGLE_LINK PORT", __LINE__);
cleanup(t);
return -1;
}
@@ -789,7 +789,7 @@ qid_basic(struct test *t)
if (init(t, NUM_QUEUES, NUM_QUEUES+1) < 0 ||
create_ports(t, NUM_QUEUES+1) < 0 ||
create_queues_type(t, NUM_QUEUES, OPDL_Q_TYPE_ORDERED)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -805,7 +805,7 @@ qid_basic(struct test *t)
if (nb_linked != 1) {
- PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u",
__FILE__,
__LINE__,
i + 1,
@@ -826,7 +826,7 @@ qid_basic(struct test *t)
&t_qid,
NULL,
1) > 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail",
__FILE__,
__LINE__);
err = -1;
@@ -841,7 +841,7 @@ qid_basic(struct test *t)
BATCH_SIZE,
0);
if (test_num_events != 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device",
__FILE__,
__LINE__,
p_id);
@@ -855,7 +855,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (test_num_events != 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device",
__FILE__,
__LINE__,
p_id);
@@ -868,7 +868,7 @@ qid_basic(struct test *t)
/* Start the device */
if (!err) {
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error with start call\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error with start call",
__FILE__,
__LINE__);
err = -1;
@@ -884,7 +884,7 @@ qid_basic(struct test *t)
&t_qid,
NULL,
1) > 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail",
__FILE__,
__LINE__);
err = -1;
@@ -904,7 +904,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets",
__FILE__,
__LINE__);
err = -1;
@@ -921,7 +921,7 @@ qid_basic(struct test *t)
0);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u",
__FILE__,
__LINE__,
p_id);
@@ -930,7 +930,7 @@ qid_basic(struct test *t)
}
if (ev[0].queue_id != q_id) {
- PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]",
__FILE__,
__LINE__,
p_id,
@@ -949,7 +949,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u",
__FILE__,
__LINE__,
p_id,
@@ -967,7 +967,7 @@ qid_basic(struct test *t)
BATCH_SIZE,
0);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u",
__FILE__,
__LINE__,
p_id);
@@ -993,17 +993,17 @@ opdl_selftest(void)
evdev = rte_event_dev_get_dev_id(eventdev_name);
if (evdev < 0) {
- PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.\n",
+ PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.",
__LINE__, eventdev_name);
/* turn on stats by default */
if (rte_vdev_init(eventdev_name, "do_validation=1") < 0) {
- PMD_DRV_LOG(ERR, "Error creating eventdev\n");
+ PMD_DRV_LOG(ERR, "Error creating eventdev");
free(t);
return -1;
}
evdev = rte_event_dev_get_dev_id(eventdev_name);
if (evdev < 0) {
- PMD_DRV_LOG(ERR, "Error finding newly created eventdev\n");
+ PMD_DRV_LOG(ERR, "Error finding newly created eventdev");
free(t);
return -1;
}
@@ -1019,27 +1019,27 @@ opdl_selftest(void)
512, /* use very small mbufs */
rte_socket_id());
if (!eventdev_func_mempool) {
- PMD_DRV_LOG(ERR, "ERROR creating mempool\n");
+ PMD_DRV_LOG(ERR, "ERROR creating mempool");
free(t);
return -1;
}
}
t->mbuf_pool = eventdev_func_mempool;
- PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...");
ret = ordered_basic(t);
- PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...");
ret = atomic_basic(t);
- PMD_DRV_LOG(ERR, "*** Running QID Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running QID Basic test...");
ret = qid_basic(t);
- PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...\n");
+ PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...");
ret = single_link(t);
- PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...\n");
+ PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...");
ret = single_link_w_stats(t);
/*
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 2096496917..babe77a20f 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -173,7 +173,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
dev->data->socket_id,
RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->rx_worker_ring == NULL) {
- SW_LOG_ERR("Error creating RX worker ring for port %d\n",
+ SW_LOG_ERR("Error creating RX worker ring for port %d",
port_id);
return -1;
}
@@ -193,7 +193,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->cq_worker_ring == NULL) {
rte_event_ring_free(p->rx_worker_ring);
- SW_LOG_ERR("Error creating CQ worker ring for port %d\n",
+ SW_LOG_ERR("Error creating CQ worker ring for port %d",
port_id);
return -1;
}
@@ -253,7 +253,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,
if (!window_size) {
SW_LOG_DBG(
- "invalid reorder_window_size for ordered queue\n"
+ "invalid reorder_window_size for ordered queue"
);
goto cleanup;
}
@@ -262,7 +262,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,
window_size * sizeof(qid->reorder_buffer[0]),
0, socket_id);
if (!qid->reorder_buffer) {
- SW_LOG_DBG("reorder_buffer malloc failed\n");
+ SW_LOG_DBG("reorder_buffer malloc failed");
goto cleanup;
}
@@ -334,7 +334,7 @@ sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
type = SW_SCHED_TYPE_DIRECT;
} else if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
& conf->event_queue_cfg) {
- SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
+ SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported");
return -ENOTSUP;
}
@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)
/* check a service core is mapped to this service */
if (!rte_service_runstate_get(sw->service_id)) {
- SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
+ SW_LOG_ERR("Warning: No Service core enabled on service %s",
sw->service_name);
return -ENOENT;
}
@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
/* check all ports are set up */
for (i = 0; i < sw->port_count; i++)
if (sw->ports[i].rx_worker_ring == NULL) {
- SW_LOG_ERR("Port %d not configured\n", i);
+ SW_LOG_ERR("Port %d not configured", i);
return -ESTALE;
}
@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
for (i = 0; i < sw->qid_count; i++)
if (!sw->qids[i].initialized ||
sw->qids[i].cq_num_mapped_cqs == 0) {
- SW_LOG_ERR("Queue %d not configured\n", i);
+ SW_LOG_ERR("Queue %d not configured", i);
return -ENOLINK;
}
@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)
if (!kvlist) {
SW_LOG_INFO(
- "Ignoring unsupported parameters when creating device '%s'\n",
+ "Ignoring unsupported parameters when creating device '%s'",
name);
} else {
int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
SW_LOG_INFO(
"Creating eventdev sw device %s, numa_node=%d, "
"sched_quanta=%d, credit_quanta=%d "
- "min_burst=%d, deq_burst=%d, refill_once=%d\n",
+ "min_burst=%d, deq_burst=%d, refill_once=%d",
name, socket_id, sched_quanta, credit_quanta,
min_burst_size, deq_burst_size, refill_once);
@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
if (name == NULL)
return -EINVAL;
- SW_LOG_INFO("Closing eventdev sw device %s\n", name);
+ SW_LOG_INFO("Closing eventdev sw device %s", name);
return rte_event_pmd_vdev_uninit(name);
}
diff --git a/drivers/event/sw/sw_evdev_xstats.c b/drivers/event/sw/sw_evdev_xstats.c
index fbac8f3ab5..076b982ab8 100644
--- a/drivers/event/sw/sw_evdev_xstats.c
+++ b/drivers/event/sw/sw_evdev_xstats.c
@@ -419,7 +419,7 @@ sw_xstats_get_names(const struct rte_eventdev *dev,
start_offset = sw->xstats_offset_for_qid[queue_port_id];
break;
default:
- SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()\n");
+ SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()");
return -EINVAL;
};
@@ -470,7 +470,7 @@ sw_xstats_update(struct sw_evdev *sw, enum rte_event_dev_xstats_mode mode,
xstats_mode_count = sw->xstats_count_per_qid[queue_port_id];
break;
default:
- SW_LOG_ERR("Invalid mode received in sw_xstats_get()\n");
+ SW_LOG_ERR("Invalid mode received in sw_xstats_get()");
goto invalid_value;
};
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 84371d5d1a..b0c6d153e4 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
goto err1;
}
@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
ret = dpaa2_affine_qbman_swp();
if (ret != 0) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
ret = dpaa2_affine_qbman_swp();
if (ret != 0) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return ret;
}
@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
msl = rte_mem_virt2memseg_list(vaddr);
if (!msl) {
- DPAA2_MEMPOOL_DEBUG("Memsegment is External.\n");
+ DPAA2_MEMPOOL_DEBUG("Memsegment is External.");
rte_fslmc_vfio_mem_dmamap((size_t)vaddr,
(size_t)paddr, (size_t)len);
}
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 1513c632c6..966fee8bfe 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -134,7 +134,7 @@ octeontx_fpa_gpool_alloc(unsigned int object_size)
if (res->sz128 == 0) {
res->sz128 = sz128;
- fpavf_log_dbg("gpool %d blk_sz %d\n", res->vf_id,
+ fpavf_log_dbg("gpool %d blk_sz %d", res->vf_id,
sz128);
return res->vf_id;
@@ -273,7 +273,7 @@ octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
goto err;
}
- fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+ fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64,
fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
@@ -351,8 +351,7 @@ octeontx_fpapf_aura_attach(unsigned int gpool_index)
sizeof(struct octeontx_mbox_fpa_cfg),
&resp, sizeof(resp));
if (ret < 0) {
- fpavf_log_err("Could not attach fpa ");
- fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+ fpavf_log_err("Could not attach fpa aura %d to pool %d. Err=%d. FuncErr=%d",
FPA_AURA_IDX(gpool_index), gpool_index, ret,
hdr.res_code);
ret = -EACCES;
@@ -380,7 +379,7 @@ octeontx_fpapf_aura_detach(unsigned int gpool_index)
hdr.vfid = gpool_index;
ret = octeontx_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
if (ret < 0) {
- fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+ fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d",
FPA_AURA_IDX(gpool_index), ret,
hdr.res_code);
ret = -EINVAL;
@@ -428,8 +427,7 @@ octeontx_fpapf_start_count(uint16_t gpool_index)
hdr.vfid = gpool_index;
ret = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
if (ret < 0) {
- fpavf_log_err("Could not start buffer counting for ");
- fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+ fpavf_log_err("Could not start buffer counting for FPA pool %d. Err=%d. FuncErr=%d",
gpool_index, ret, hdr.res_code);
ret = -EINVAL;
goto err;
@@ -636,7 +634,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
FPA_VF_VHAURA_CNT(gaura)));
if (cnt) {
- fpavf_log_dbg("buffer exist in pool cnt %" PRId64 "\n", cnt);
+ fpavf_log_dbg("buffer exist in pool cnt %" PRId64, cnt);
return -EBUSY;
}
@@ -664,7 +662,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
(pool_bar + FPA_VF_VHAURA_OP_ALLOC(gaura)));
if (node == NULL) {
- fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf\n",
+ fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf",
gaura, avail);
break;
}
@@ -684,7 +682,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
curr = curr[0]) {
if (curr == curr[0] ||
((uintptr_t)curr != ((uintptr_t)curr[0] - sz))) {
- fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)\n",
+ fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)",
gpool, curr, curr[0]);
}
}
@@ -705,7 +703,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
ret = octeontx_fpapf_aura_detach(gpool);
if (ret) {
- fpavf_log_err("Failed to detach gaura %u. error code=%d\n",
+ fpavf_log_err("Failed to detach gaura %u. error code=%d",
gpool, ret);
}
@@ -757,7 +755,7 @@ octeontx_fpavf_identify(void *bar0)
stack_ln_ptr = fpavf_read64((void *)((uintptr_t)bar0 +
FPA_VF_VHPOOL_THRESHOLD(0)));
if (vf_idx >= FPA_VF_MAX) {
- fpavf_log_err("vf_id(%d) greater than max vf (32)\n", vf_id);
+ fpavf_log_err("vf_id(%d) greater than max vf (32)", vf_id);
return -E2BIG;
}
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index f4de1c8412..631e521b58 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -27,11 +27,11 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
goto _end;
if ((uint32_t)rc != object_size)
- fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+ fpavf_log_err("buffer size mismatch: %d instead of %u",
rc, object_size);
- fpavf_log_info("Pool created %p with .. ", (void *)pool);
- fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+ fpavf_log_info("Pool created %p with .. obj_sz %d, cnt %d",
+ (void *)pool, object_size, memseg_count);
/* assign pool handle to mempool */
mp->pool_id = (uint64_t)pool;
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index 41f3b7a95d..3c328d9d0e 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -108,14 +108,14 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
kvlist = rte_kvargs_parse(devargs->args, valid_args);
if (kvlist == NULL) {
- plt_err("Error parsing devargs\n");
+ plt_err("Error parsing devargs");
return -EINVAL;
}
if (rte_kvargs_count(kvlist, CN10K_ML_FW_PATH) == 1) {
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_PATH, &parse_string_arg, &fw_path);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_FW_PATH);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_FW_PATH);
ret = -EINVAL;
goto exit;
}
@@ -126,7 +126,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_ENABLE_DPE_WARNINGS,
&parse_integer_arg, &cn10k_mldev->fw.enable_dpe_warnings);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_FW_ENABLE_DPE_WARNINGS);
ret = -EINVAL;
goto exit;
@@ -138,7 +138,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_REPORT_DPE_WARNINGS,
&parse_integer_arg, &cn10k_mldev->fw.report_dpe_warnings);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_FW_REPORT_DPE_WARNINGS);
ret = -EINVAL;
goto exit;
@@ -150,7 +150,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_CACHE_MODEL_DATA, &parse_integer_arg,
&cn10k_mldev->cache_model_data);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_DEV_CACHE_MODEL_DATA);
ret = -EINVAL;
goto exit;
@@ -162,7 +162,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_ALLOC_MODE, &parse_string_arg,
&ocm_alloc_mode);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_ALLOC_MODE);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_ALLOC_MODE);
ret = -EINVAL;
goto exit;
}
@@ -173,7 +173,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK, &parse_integer_arg,
&cn10k_mldev->hw_queue_lock);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_DEV_HW_QUEUE_LOCK);
ret = -EINVAL;
goto exit;
@@ -185,7 +185,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg,
&cn10k_mldev->ocm_page_size);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_PAGE_SIZE);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_PAGE_SIZE);
ret = -EINVAL;
goto exit;
}
@@ -204,7 +204,7 @@ check_args:
} else {
if ((cn10k_mldev->fw.enable_dpe_warnings < 0) ||
(cn10k_mldev->fw.enable_dpe_warnings > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
cn10k_mldev->fw.enable_dpe_warnings);
ret = -EINVAL;
goto exit;
@@ -218,7 +218,7 @@ check_args:
} else {
if ((cn10k_mldev->fw.report_dpe_warnings < 0) ||
(cn10k_mldev->fw.report_dpe_warnings > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_REPORT_DPE_WARNINGS,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_FW_REPORT_DPE_WARNINGS,
cn10k_mldev->fw.report_dpe_warnings);
ret = -EINVAL;
goto exit;
@@ -231,7 +231,7 @@ check_args:
cn10k_mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT;
} else {
if ((cn10k_mldev->cache_model_data < 0) || (cn10k_mldev->cache_model_data > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_CACHE_MODEL_DATA,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_CACHE_MODEL_DATA,
cn10k_mldev->cache_model_data);
ret = -EINVAL;
goto exit;
@@ -244,7 +244,7 @@ check_args:
} else {
if (!((strcmp(ocm_alloc_mode, "lowest") == 0) ||
(strcmp(ocm_alloc_mode, "largest") == 0))) {
- plt_err("Invalid argument, %s = %s\n", CN10K_ML_OCM_ALLOC_MODE,
+ plt_err("Invalid argument, %s = %s", CN10K_ML_OCM_ALLOC_MODE,
ocm_alloc_mode);
ret = -EINVAL;
goto exit;
@@ -257,7 +257,7 @@ check_args:
cn10k_mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
} else {
if ((cn10k_mldev->hw_queue_lock < 0) || (cn10k_mldev->hw_queue_lock > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_HW_QUEUE_LOCK,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK,
cn10k_mldev->hw_queue_lock);
ret = -EINVAL;
goto exit;
@@ -269,7 +269,7 @@ check_args:
cn10k_mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
} else {
if (cn10k_mldev->ocm_page_size < 0) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_OCM_PAGE_SIZE,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_OCM_PAGE_SIZE,
cn10k_mldev->ocm_page_size);
ret = -EINVAL;
goto exit;
@@ -284,7 +284,7 @@ check_args:
}
if (!found) {
- plt_err("Unsupported ocm_page_size = %d\n", cn10k_mldev->ocm_page_size);
+ plt_err("Unsupported ocm_page_size = %d", cn10k_mldev->ocm_page_size);
ret = -EINVAL;
goto exit;
}
@@ -773,7 +773,7 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev)
/* Read firmware image to a buffer */
ret = rte_firmware_read(fw->path, &fw_buffer, &fw_size);
if ((ret < 0) || (fw_buffer == NULL)) {
- plt_err("Unable to read firmware data: %s\n", fw->path);
+ plt_err("Unable to read firmware data: %s", fw->path);
return ret;
}
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 971362b242..7bd73727e1 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -437,7 +437,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,
model = cnxk_mldev->mldev->data->models[model_id];
if (model == NULL) {
- plt_err("Invalid model_id = %d\n", model_id);
+ plt_err("Invalid model_id = %d", model_id);
return -EINVAL;
}
}
@@ -454,7 +454,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,
} else {
for (j = 0; j < nb_ids; j++) {
if (stat_ids[j] < start_id || stat_ids[j] > end_id) {
- plt_err("Invalid stat_ids[%d] = %d for model_id = %d\n", j,
+ plt_err("Invalid stat_ids[%d] = %d for model_id = %d", j,
stat_ids[j], lcl_model_id);
return -EINVAL;
}
@@ -510,12 +510,12 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co
cnxk_ml_dev_info_get(dev, &dev_info);
if (conf->nb_models > dev_info.max_models) {
- plt_err("Invalid device config, nb_models > %u\n", dev_info.max_models);
+ plt_err("Invalid device config, nb_models > %u", dev_info.max_models);
return -EINVAL;
}
if (conf->nb_queue_pairs > dev_info.max_queue_pairs) {
- plt_err("Invalid device config, nb_queue_pairs > %u\n", dev_info.max_queue_pairs);
+ plt_err("Invalid device config, nb_queue_pairs > %u", dev_info.max_queue_pairs);
return -EINVAL;
}
@@ -533,10 +533,10 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co
plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u",
conf->nb_queue_pairs, conf->nb_models);
} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) {
- plt_err("Device can't be reconfigured in started state\n");
+ plt_err("Device can't be reconfigured in started state");
return -ENOTSUP;
} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) {
- plt_err("Device can't be reconfigured after close\n");
+ plt_err("Device can't be reconfigured after close");
return -ENOTSUP;
}
@@ -853,7 +853,7 @@ cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id,
uint32_t nb_desc;
if (queue_pair_id >= dev->data->nb_queue_pairs) {
- plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)\n", queue_pair_id,
+ plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)", queue_pair_id,
dev->data->nb_queue_pairs);
return -EINVAL;
}
@@ -1249,11 +1249,11 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
}
if ((total_wb_pages + max_scratch_pages) > ocm->num_pages) {
- plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) > %u\n",
+ plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) > %u",
lcl_model_id, total_wb_pages, max_scratch_pages, ocm->num_pages);
if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
- plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u\n", layer_id,
+ plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u", layer_id,
model->layer[layer_id].glow.ocm_map.wb_pages,
model->layer[layer_id].glow.ocm_map.scratch_pages);
#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM
@@ -1262,7 +1262,7 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
layer_id++) {
if (model->layer[layer_id].type == ML_CNXK_LAYER_TYPE_MRVL) {
plt_ml_dbg(
- "layer_id = %u: wb_pages = %u, scratch_pages = %u\n",
+ "layer_id = %u: wb_pages = %u, scratch_pages = %u",
layer_id,
model->layer[layer_id].glow.ocm_map.wb_pages,
model->layer[layer_id].glow.ocm_map.scratch_pages);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index cb6f8141a8..0f367faad5 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -359,13 +359,13 @@ atl_rx_init(struct rte_eth_dev *eth_dev)
buff_size = RTE_ALIGN_FLOOR(buff_size, 1024);
if (buff_size > HW_ATL_B0_RXD_BUF_SIZE_MAX) {
PMD_INIT_LOG(WARNING,
- "Port %d queue %d: mem pool buff size is too big\n",
+ "Port %d queue %d: mem pool buff size is too big",
rxq->port_id, rxq->queue_id);
buff_size = HW_ATL_B0_RXD_BUF_SIZE_MAX;
}
if (buff_size < 1024) {
PMD_INIT_LOG(ERR,
- "Port %d queue %d: mem pool buff size is too small\n",
+ "Port %d queue %d: mem pool buff size is too small",
rxq->port_id, rxq->queue_id);
return -EINVAL;
}
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
index 84d11ab3a5..06d79115b9 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
@@ -76,7 +76,7 @@ int hw_atl_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops)
self->fw_ver_actual) == 0) {
*fw_ops = &aq_fw_2x_ops;
} else {
- PMD_DRV_LOG(ERR, "Bad FW version detected: %x\n",
+ PMD_DRV_LOG(ERR, "Bad FW version detected: %x",
self->fw_ver_actual);
return -EOPNOTSUPP;
}
@@ -124,7 +124,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "MAC kickstart failed\n");
+ PMD_DRV_LOG(ERR, "MAC kickstart failed");
return -EIO;
}
@@ -152,7 +152,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+ PMD_DRV_LOG(ERR, "FW kickstart failed");
return -EIO;
}
/* Old FW requires fixed delay after init */
@@ -209,7 +209,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
aq_hw_write_reg(self, 0x534, 0xA0);
if (rbl_status == 0xF1A7) {
- PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented\n");
+ PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented");
return -EOPNOTSUPP;
}
@@ -221,7 +221,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+ PMD_DRV_LOG(ERR, "FW kickstart failed");
return -EIO;
}
/* Old FW requires fixed delay after init */
@@ -246,7 +246,7 @@ int hw_atl_utils_soft_reset(struct aq_hw_s *self)
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started\n");
+ PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started");
return -EOPNOTSUPP;
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 6ce87f83f4..da45ebf45f 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
tc_num = pdata->pfc_map[pfc_conf->priority];
if (pfc_conf->priority >= pdata->hw_feat.tc_cnt) {
- PMD_INIT_LOG(ERR, "Max supported traffic class: %d\n",
+ PMD_INIT_LOG(ERR, "Max supported traffic class: %d",
pdata->hw_feat.tc_cnt);
return -EINVAL;
}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 597ee43359..3153cc4d80 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -8124,7 +8124,7 @@ static int bnx2x_get_shmem_info(struct bnx2x_softc *sc)
val = sc->devinfo.bc_ver >> 8;
if (val < BNX2X_BC_VER) {
/* for now only warn later we might need to enforce this */
- PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC\n",
+ PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC",
BNX2X_BC_VER, val);
}
sc->link_params.feature_config_flags |=
@@ -9489,16 +9489,16 @@ static int bnx2x_prev_unload(struct bnx2x_softc *sc)
hw_lock_val = (REG_RD(sc, hw_lock_reg));
if (hw_lock_val) {
if (hw_lock_val & HW_LOCK_RESOURCE_NVRAM) {
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock");
REG_WR(sc, MCP_REG_MCPR_NVM_SW_ARB,
(MCPR_NVM_SW_ARB_ARB_REQ_CLR1 << SC_PORT(sc)));
}
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock");
REG_WR(sc, hw_lock_reg, 0xffffffff);
}
if (MCPR_ACCESS_LOCK_LOCK & REG_RD(sc, MCP_REG_MCPR_ACCESS_LOCK)) {
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR");
REG_WR(sc, MCP_REG_MCPR_ACCESS_LOCK, 0);
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 06c21ebe6d..3cca8a07f3 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -702,7 +702,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t member_id)
ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
members[i], rte_strerror(-ret));
continue;
}
@@ -879,7 +879,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
member_id, rte_strerror(-ret));
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 56945e2349..253f38da4a 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -60,7 +60,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
0, data_size, socket_id);
if (internals->mode6.mempool == NULL) {
- RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.\n",
+ RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.",
bond_dev->device->name);
goto mempool_alloc_error;
}
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 99e496556a..ffc1322047 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -482,7 +482,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, member_port_id, strerror(-ret));
return ret;
@@ -609,7 +609,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
&bonding_eth_dev->data->port_id);
internals->member_count--;
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
member_port_id, rte_strerror(-ret));
return -1;
}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c40d18d128..4144c86be4 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -191,7 +191,7 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, member_port, strerror(-ret));
return ret;
@@ -221,7 +221,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, bond_dev->data->port_id,
strerror(-ret));
@@ -2289,7 +2289,7 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__,
member.port_id,
strerror(-ret));
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index c841b31051..60baf806ab 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
}
if (mp == NULL || mp[0] == NULL || mp[1] == NULL) {
- plt_err("invalid memory pools\n");
+ plt_err("invalid memory pools");
return -EINVAL;
}
@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
return -EINVAL;
}
- plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u\n", (*spb_pool)->name,
+ plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u", (*spb_pool)->name,
(*lpb_pool)->name, (*lpb_pool)->elt_size, (*spb_pool)->elt_size);
return 0;
diff --git a/drivers/net/cnxk/cnxk_ethdev_mcs.c b/drivers/net/cnxk/cnxk_ethdev_mcs.c
index 06ef7c98f3..119060bcf3 100644
--- a/drivers/net/cnxk/cnxk_ethdev_mcs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_mcs.c
@@ -568,17 +568,17 @@ cnxk_eth_macsec_session_stats_get(struct cnxk_eth_dev *dev, struct cnxk_macsec_s
req.id = sess->flow_id;
req.dir = sess->dir;
roc_mcs_flowid_stats_get(mcs_dev->mdev, &req, &flow_stats);
- plt_nix_dbg("\n******* FLOW_ID IDX[%u] STATS dir: %u********\n", sess->flow_id, sess->dir);
- plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64 "\n", flow_stats.tcam_hit_cnt);
+ plt_nix_dbg("******* FLOW_ID IDX[%u] STATS dir: %u********", sess->flow_id, sess->dir);
+ plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64, flow_stats.tcam_hit_cnt);
req.id = mcs_dev->port_id;
req.dir = sess->dir;
roc_mcs_port_stats_get(mcs_dev->mdev, &req, &port_stats);
- plt_nix_dbg("\n********** PORT[0] STATS ****************\n");
- plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64 "\n", port_stats.tcam_miss_cnt);
- plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64 "\n", port_stats.parser_err_cnt);
- plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64 "\n", port_stats.preempt_err_cnt);
- plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64 "\n", port_stats.sectag_insert_err_cnt);
+ plt_nix_dbg("********** PORT[0] STATS ****************");
+ plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64, port_stats.tcam_miss_cnt);
+ plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64, port_stats.parser_err_cnt);
+ plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64, port_stats.preempt_err_cnt);
+ plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64, port_stats.sectag_insert_err_cnt);
req.id = sess->secy_id;
req.dir = sess->dir;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index c8f4848f92..89e00f8fc7 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
/* Wait for sq entries to be flushed */
rc = roc_nix_tm_sq_flush_spin(sq);
if (rc) {
- plt_err("Failed to drain sq, rc=%d\n", rc);
+ plt_err("Failed to drain sq, rc=%d", rc);
goto exit;
}
if (data->tx_queue_state[i] == RTE_ETH_QUEUE_STATE_STARTED) {
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 40569ddc6f..011229a470 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -2020,7 +2020,7 @@ cpfl_metadata_write_port_id(struct cpfl_itf *itf)
dev_id = cpfl_get_port_id(itf);
if (dev_id == CPFL_INVALID_HW_ID) {
- PMD_DRV_LOG(ERR, "fail to get hw ID\n");
+ PMD_DRV_LOG(ERR, "fail to get hw ID");
return false;
}
cpfl_metadata_write16(&itf->adapter->meta, type, offset, dev_id << 3);
diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
index be34da9fa2..42553c9641 100644
--- a/drivers/net/cpfl/cpfl_fxp_rule.c
+++ b/drivers/net/cpfl/cpfl_fxp_rule.c
@@ -77,7 +77,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
if (ret && ret != CPFL_ERR_CTLQ_NO_WORK && ret != CPFL_ERR_CTLQ_ERROR &&
ret != CPFL_ERR_CTLQ_EMPTY) {
- PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
+ PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x", ret);
retries++;
continue;
}
@@ -108,7 +108,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
buff_cnt = dma ? 1 : 0;
ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt, &dma);
if (ret)
- PMD_INIT_LOG(WARNING, "could not posted recv bufs\n");
+ PMD_INIT_LOG(WARNING, "could not posted recv bufs");
}
break;
}
@@ -131,7 +131,7 @@ cpfl_mod_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
/* prepare rule blob */
if (!dma->va) {
- PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+ PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
return -1;
}
blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
@@ -176,7 +176,7 @@ cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
uint16_t cfg_ctrl;
if (!dma->va) {
- PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+ PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
return -1;
}
blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8e610b6bba..c5b1f161fd 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -728,7 +728,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
total_nb_rx_desc += nb_rx_desc;
if (total_nb_rx_desc > MAX_NB_RX_DESC) {
- DPAA2_PMD_WARN("\nTotal nb_rx_desc exceeds %d limit. Please use Normal buffers",
+ DPAA2_PMD_WARN("Total nb_rx_desc exceeds %d limit. Please use Normal buffers",
MAX_NB_RX_DESC);
DPAA2_PMD_WARN("To use Normal buffers, run 'export DPNI_NORMAL_BUF=1' before running dynamic_dpl.sh script");
}
@@ -1063,7 +1063,7 @@ dpaa2_dev_rx_queue_count(void *rx_queue)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return -EINVAL;
}
@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
if (ret == -1)
DPAA2_PMD_DEBUG("No change in status");
else
- DPAA2_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+ DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
link.link_status ? "Up" : "Down");
return ret;
@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
dpaa2_ethq->tc_index, flow_id,
OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
- DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
+ DPAA2_PMD_ERR("Error setting opr: ret: %d", ret);
return ret;
}
@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
WRIOP_SS_INITIALIZER(priv);
ret = dpaa2_eth_load_wriop_soft_parser(priv, DPNI_SS_INGRESS);
if (ret < 0) {
- DPAA2_PMD_ERR(" Error(%d) in loading softparser\n",
+ DPAA2_PMD_ERR(" Error(%d) in loading softparser",
ret);
return ret;
}
@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
ret = dpaa2_eth_enable_wriop_soft_parser(priv,
DPNI_SS_INGRESS);
if (ret < 0) {
- DPAA2_PMD_ERR(" Error(%d) in enabling softparser\n",
+ DPAA2_PMD_ERR(" Error(%d) in enabling softparser",
ret);
return ret;
}
@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
DPAA2_MAX_SGS * sizeof(struct qbman_sge),
rte_socket_id());
if (dpaa2_tx_sg_pool == NULL) {
- DPAA2_PMD_ERR("SG pool creation failed\n");
+ DPAA2_PMD_ERR("SG pool creation failed");
return -ENOMEM;
}
}
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e60650..e590f6f748 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3360,7 +3360,7 @@ dpaa2_flow_verify_action(
rxq = priv->rx_vq[rss_conf->queue[i]];
if (rxq->tc_index != attr->group) {
DPAA2_PMD_ERR(
- "Queue/Group combination are not supported\n");
+ "Queue/Group combination are not supported");
return -ENOTSUP;
}
}
@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS QoS table can not be configured(%d)\n",
+ "RSS QoS table can not be configured(%d)",
ret);
return -1;
}
@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
&priv->extract.tc_key_extract[flow->tc_id].dpkg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "unable to set flow distribution.please check queue config\n");
+ "unable to set flow distribution.please check queue config");
return ret;
}
/* Allocate DMA'ble memory to write the rules */
param = (size_t)rte_malloc(NULL, 256, 64);
if (!param) {
- DPAA2_PMD_ERR("Memory allocation failure\n");
+ DPAA2_PMD_ERR("Memory allocation failure");
return -1;
}
@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS TC table cannot be configured: %d\n",
+ "RSS TC table cannot be configured: %d",
ret);
rte_free((void *)param);
return -1;
@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS QoS dist can't be configured-%d\n",
+ "RSS QoS dist can't be configured-%d",
ret);
return -1;
}
@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
int ret = 0;
if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
- DPAA2_PMD_ERR("Priority group is out of range\n");
+ DPAA2_PMD_ERR("Priority group is out of range");
ret = -ENOTSUP;
}
if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
- DPAA2_PMD_ERR("Priority within the group is out of range\n");
+ DPAA2_PMD_ERR("Priority within the group is out of range");
ret = -ENOTSUP;
}
if (unlikely(attr->egress)) {
DPAA2_PMD_ERR(
- "Flow configuration is not supported on egress side\n");
+ "Flow configuration is not supported on egress side");
ret = -ENOTSUP;
}
if (unlikely(!attr->ingress)) {
- DPAA2_PMD_ERR("Ingress flag must be configured\n");
+ DPAA2_PMD_ERR("Ingress flag must be configured");
ret = -EINVAL;
}
return ret;
@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Failure to get dpni@%p attribute, err code %d\n",
+ "Failure to get dpni@%p attribute, err code %d",
dpni, ret);
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ATTR,
@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid attributes are given\n");
+ "Invalid attributes are given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ATTR,
flow_attr, "invalid");
@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_patterns(pattern);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid pattern list is given\n");
+ "Invalid pattern list is given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "invalid");
@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_actions(actions);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid action list is given\n");
+ "Invalid action list is given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ACTION,
actions, "invalid");
@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 2ff1a98fda..7dd5a60966 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -88,7 +88,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
(2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto creation_error;
}
key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
@@ -211,7 +211,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
- DPAA2_PMD_ERR("Invalid destination id\n");
+ DPAA2_PMD_ERR("Invalid destination id");
goto creation_error;
}
dpdmux_action.dest_if = vf_conf->id;
diff --git a/drivers/net/dpaa2/dpaa2_recycle.c b/drivers/net/dpaa2/dpaa2_recycle.c
index fbfdf360d1..4fde9b95a0 100644
--- a/drivers/net/dpaa2/dpaa2_recycle.c
+++ b/drivers/net/dpaa2/dpaa2_recycle.c
@@ -423,7 +423,7 @@ ls_mac_serdes_lpbk_support(uint16_t mac_id,
sd_idx = ls_serdes_cfg_to_idx(sd_cfg, sd_id);
if (sd_idx < 0) {
- DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist\n",
+ DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist",
sd_cfg);
return false;
}
@@ -552,7 +552,7 @@ ls_serdes_eth_lpbk(uint16_t mac_id, int en)
(serdes_id - LSX_SERDES_1) * 0x10000,
sizeof(struct ccsr_ls_serdes) / 64 * 64 + 64);
if (!serdes_base) {
- DPAA2_PMD_ERR("Serdes register map failed\n");
+ DPAA2_PMD_ERR("Serdes register map failed");
return -ENOMEM;
}
@@ -587,7 +587,7 @@ lx_serdes_eth_lpbk(uint16_t mac_id, int en)
(serdes_id - LSX_SERDES_1) * 0x10000,
sizeof(struct ccsr_lx_serdes) / 64 * 64 + 64);
if (!serdes_base) {
- DPAA2_PMD_ERR("Serdes register map failed\n");
+ DPAA2_PMD_ERR("Serdes register map failed");
return -ENOMEM;
}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..b64232b88f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -640,7 +640,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
ret = dpaa2_affine_qbman_swp();
if (ret) {
- DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d\n",
+ DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -691,7 +691,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
hw_annot_addr = (void *)((size_t)v_addr + DPAA2_FD_PTA_SIZE);
fas = hw_annot_addr;
- DPAA2_PMD_ERR("\n\n[%d] error packet on port[%d]:"
+ DPAA2_PMD_ERR("[%d] error packet on port[%d]:"
" fd_off: %d, fd_err: %x, fas_status: %x",
rte_lcore_id(), eth_data->port_id,
DPAA2_GET_FD_OFFSET(fd), DPAA2_GET_FD_ERR(fd),
@@ -976,7 +976,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1107,7 +1107,7 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1256,7 +1256,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1573,7 +1573,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1747,7 +1747,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 63463c4fbf..eb649fb063 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -165,7 +165,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
addr = rte_malloc(NULL, sp_param.size, 64);
if (!addr) {
- DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+ DPAA2_PMD_ERR("Memory unavailable for soft parser param");
return -1;
}
@@ -174,7 +174,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
if (ret) {
- DPAA2_PMD_ERR("dpni_load_sw_sequence failed\n");
+ DPAA2_PMD_ERR("dpni_load_sw_sequence failed");
rte_free(addr);
return ret;
}
@@ -214,7 +214,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
if (cfg.param_size) {
param_addr = rte_malloc(NULL, cfg.param_size, 64);
if (!param_addr) {
- DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+ DPAA2_PMD_ERR("Memory unavailable for soft parser param");
return -1;
}
@@ -227,7 +227,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
if (ret) {
- DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d\n",
+ DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
priv->hw_id);
rte_free(param_addr);
return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 8fe5bfa013..3c0f282ec3 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -584,7 +584,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
return -1;
}
- DPAA2_PMD_DEBUG("tc_id = %d, channel = %d\n\n", tc_id,
+ DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
node->parent->channel_id);
ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
((node->parent->channel_id << 8) | tc_id),
@@ -653,7 +653,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
int i;
if (n == 1) {
- DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+ DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
nodes[n - 1]->id, nodes[n - 1]->priority,
n - 1);
dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -669,7 +669,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
}
dpaa2_tm_sort_and_configure(dev, nodes, n - 1);
- DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+ DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
nodes[n - 1]->id, nodes[n - 1]->priority,
n - 1);
dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -709,7 +709,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
}
}
if (i > 0) {
- DPAA2_PMD_DEBUG("Configure queues\n");
+ DPAA2_PMD_DEBUG("Configure queues");
dpaa2_tm_sort_and_configure(dev, nodes, i);
}
}
@@ -733,13 +733,13 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
node->profile->params.peak.rate / (1024 * 1024);
/* root node */
if (node->parent == NULL) {
- DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u\n",
+ DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u",
tx_cr_shaper.rate_limit,
tx_cr_shaper.max_burst_size);
param = 0x2;
param |= node->profile->params.pkt_length_adjust << 16;
} else {
- DPAA2_PMD_DEBUG("Channel = %d S.rate = %u\n",
+ DPAA2_PMD_DEBUG("Channel = %d S.rate = %u",
node->channel_id,
tx_cr_shaper.rate_limit);
param = (node->channel_id << 8);
@@ -871,15 +871,15 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
"Scheduling Failed\n");
goto out;
}
- DPAA2_PMD_DEBUG("########################################\n");
- DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
+ DPAA2_PMD_DEBUG("########################################");
+ DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
for (t = 0; t < DPNI_MAX_TC; t++) {
DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
- DPAA2_PMD_DEBUG("delta = %d\n", prio_cfg.tc_sched[t].delta_bandwidth);
+ DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
}
- DPAA2_PMD_DEBUG("prioritya = %d\n", prio_cfg.prio_group_A);
- DPAA2_PMD_DEBUG("priorityb = %d\n", prio_cfg.prio_group_B);
- DPAA2_PMD_DEBUG("separate grps = %d\n\n", prio_cfg.separate_groups);
+ DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
+ DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
+ DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
}
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 8858f975f8..d64a1aedd3 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR,
"Address change required to access page 0xA2, "
"but not supported. Please report the module "
- "type to the driver maintainers.\n");
+ "type to the driver maintainers.");
page_swap = true;
}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index c9352f0746..d8c30ef150 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
char buf[RTE_ETHER_ADDR_FMT_SIZE];
rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
- ENETC_PMD_NOTICE("%s%s\n", name, buf);
+ ENETC_PMD_NOTICE("%s%s", name, buf);
}
static int
@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
char *first_byte;
ENETC_PMD_NOTICE("MAC is not available for this SI, "
- "set random MAC\n");
+ "set random MAC");
mac = (uint32_t *)hw->mac.addr;
*mac = (uint32_t)rte_rand();
first_byte = (char *)mac;
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 898aad1c37..8c7067fbb5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -253,7 +253,7 @@ enetfec_eth_link_update(struct rte_eth_dev *dev,
link.link_status = lstatus;
link.link_speed = RTE_ETH_SPEED_NUM_1G;
- ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ ENETFEC_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
"Up");
return rte_eth_linkstatus_set(dev, &link);
@@ -462,7 +462,7 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
}
if (queue_idx >= ENETFEC_MAX_Q) {
- ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d\n",
+ ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d",
queue_idx, ENETFEC_MAX_Q);
return -EINVAL;
}
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
index 6539cbb354..9f4e896985 100644
--- a/drivers/net/enetfec/enet_uio.c
+++ b/drivers/net/enetfec/enet_uio.c
@@ -177,7 +177,7 @@ config_enetfec_uio(struct enetfec_private *fep)
/* Mapping is done only one time */
if (enetfec_count > 0) {
- ENETFEC_PMD_INFO("Mapped!\n");
+ ENETFEC_PMD_INFO("Mapped!");
return 0;
}
@@ -191,7 +191,7 @@ config_enetfec_uio(struct enetfec_private *fep)
/* Open device file */
uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
if (uio_job->uio_fd < 0) {
- ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file");
return -1;
}
@@ -230,7 +230,7 @@ enetfec_configure(void)
d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
if (d == NULL) {
- ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ ENETFEC_PMD_ERR("Error opening directory '%s': %s",
FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
return -1;
}
@@ -249,7 +249,7 @@ enetfec_configure(void)
ret = sscanf(dir->d_name + strlen("uio"), "%d",
&uio_minor_number);
if (ret < 0)
- ENETFEC_PMD_ERR("Error: not find minor number\n");
+ ENETFEC_PMD_ERR("Error: not find minor number");
/*
* Open file uioX/name and read first line which
* contains the name for the device. Based on the
@@ -259,7 +259,7 @@ enetfec_configure(void)
ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
dir->d_name, "name", uio_name);
if (ret != 0) {
- ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ ENETFEC_PMD_INFO("file_read_first_line failed");
closedir(d);
return -1;
}
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index b04b6c9aa1..1121874346 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, addr);
- ENICPMD_LOG(DEBUG, " %s address %s\n",
+ ENICPMD_LOG(DEBUG, " %s address %s",
add ? "add" : "remove", mac_str);
}
@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
rte_is_broadcast_ether_addr(addr)) {
rte_ether_format_addr(mac_str,
RTE_ETHER_ADDR_FMT_SIZE, addr);
- ENICPMD_LOG(ERR, " invalid multicast address %s\n",
+ ENICPMD_LOG(ERR, " invalid multicast address %s",
mac_str);
return -EINVAL;
}
@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
/* Flush all if requested */
if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- ENICPMD_LOG(DEBUG, " flush multicast addresses\n");
+ ENICPMD_LOG(DEBUG, " flush multicast addresses");
for (i = 0; i < enic->mc_count; i++) {
addr = &enic->mc_addrs[i];
debug_log_add_del_addr(addr, false);
@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
}
if (nb_mc_addr > ENIC_MULTICAST_PERFECT_FILTERS) {
- ENICPMD_LOG(ERR, " too many multicast addresses: max=%d\n",
+ ENICPMD_LOG(ERR, " too many multicast addresses: max=%d",
ENIC_MULTICAST_PERFECT_FILTERS);
return -ENOSPC;
}
@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
- ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
+ ENICPMD_LOG(DEBUG, " overlay offload is not supported");
return -ENOTSUP;
}
return 0;
@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
cfg = vxlan ? OVERLAY_CFG_VXLAN_PORT_UPDATE :
OVERLAY_CFG_GENEVE_PORT_UPDATE;
if (vnic_dev_overlay_offload_cfg(enic->vdev, cfg, port)) {
- ENICPMD_LOG(DEBUG, " failed to update tunnel port\n");
+ ENICPMD_LOG(DEBUG, " failed to update tunnel port");
return -EINVAL;
}
- ENICPMD_LOG(DEBUG, " updated %s port to %u\n",
+ ENICPMD_LOG(DEBUG, " updated %s port to %u",
vxlan ? "vxlan" : "geneve", port);
if (vxlan)
enic->vxlan_port = port;
@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
* "Adding" a new port number replaces it.
*/
if (tnl->udp_port == port || tnl->udp_port == 0) {
- ENICPMD_LOG(DEBUG, " %u is already configured or invalid\n",
+ ENICPMD_LOG(DEBUG, " %u is already configured or invalid",
tnl->udp_port);
return -EINVAL;
}
@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
* which is tied to inner RSS and TSO.
*/
if (tnl->udp_port != port) {
- ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port\n",
+ ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port",
tnl->udp_port);
return -EINVAL;
}
@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
}
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- ENICPMD_LOG(ERR, "unsupported representor type: %s\n",
+ ENICPMD_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index e6c9ad442a..758000ea21 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -1351,14 +1351,14 @@ static void
enic_dump_actions(const struct filter_action_v2 *ea)
{
if (ea->type == FILTER_ACTION_RQ_STEERING) {
- ENICPMD_LOG(INFO, "Action(V1), queue: %u\n", ea->rq_idx);
+ ENICPMD_LOG(INFO, "Action(V1), queue: %u", ea->rq_idx);
} else if (ea->type == FILTER_ACTION_V2) {
- ENICPMD_LOG(INFO, "Actions(V2)\n");
+ ENICPMD_LOG(INFO, "Actions(V2)");
if (ea->flags & FILTER_ACTION_RQ_STEERING_FLAG)
- ENICPMD_LOG(INFO, "\tqueue: %u\n",
+ ENICPMD_LOG(INFO, "\tqueue: %u",
enic_sop_rq_idx_to_rte_idx(ea->rq_idx));
if (ea->flags & FILTER_ACTION_FILTER_ID_FLAG)
- ENICPMD_LOG(INFO, "\tfilter_id: %u\n", ea->filter_id);
+ ENICPMD_LOG(INFO, "\tfilter_id: %u", ea->filter_id);
}
}
@@ -1374,13 +1374,13 @@ enic_dump_filter(const struct filter_v2 *filt)
switch (filt->type) {
case FILTER_IPV4_5TUPLE:
- ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE\n");
+ ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE");
break;
case FILTER_USNIC_IP:
case FILTER_DPDK_1:
/* FIXME: this should be a loop */
gp = &filt->u.generic_1;
- ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x\n",
+ ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x",
gp->val_vlan, gp->mask_vlan);
if (gp->mask_flags & FILTER_GENERIC_1_IPV4)
@@ -1438,7 +1438,7 @@ enic_dump_filter(const struct filter_v2 *filt)
? "ipfrag(y)" : "ipfrag(n)");
else
sprintf(ipfrag, "%s ", "ipfrag(x)");
- ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s\n", ip4, ip6, udp,
+ ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s", ip4, ip6, udp,
tcp, tcpudp, ip4csum, l4csum, ipfrag);
for (i = 0; i < FILTER_GENERIC_1_NUM_LAYERS; i++) {
@@ -1455,7 +1455,7 @@ enic_dump_filter(const struct filter_v2 *filt)
bp += 2;
}
*bp = '\0';
- ENICPMD_LOG(INFO, "\tL%u mask: %s\n", i + 2, buf);
+ ENICPMD_LOG(INFO, "\tL%u mask: %s", i + 2, buf);
bp = buf;
for (j = 0; j <= mbyte; j++) {
sprintf(bp, "%02x",
@@ -1463,11 +1463,11 @@ enic_dump_filter(const struct filter_v2 *filt)
bp += 2;
}
*bp = '\0';
- ENICPMD_LOG(INFO, "\tL%u val: %s\n", i + 2, buf);
+ ENICPMD_LOG(INFO, "\tL%u val: %s", i + 2, buf);
}
break;
default:
- ENICPMD_LOG(INFO, "FILTER UNKNOWN\n");
+ ENICPMD_LOG(INFO, "FILTER UNKNOWN");
break;
}
}
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 5d8d29135c..8469e06de9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -64,7 +64,7 @@ static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
/* Pass vf not pf because of cq index calculation. See enic_alloc_wq */
err = enic_alloc_wq(&vf->enic, queue_idx, socket_id, nb_desc);
if (err) {
- ENICPMD_LOG(ERR, "error in allocating wq\n");
+ ENICPMD_LOG(ERR, "error in allocating wq");
return err;
}
return 0;
@@ -104,7 +104,7 @@ static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
ret = enic_alloc_rq(&vf->enic, queue_idx, socket_id, mp, nb_desc,
rx_conf->rx_free_thresh);
if (ret) {
- ENICPMD_LOG(ERR, "error in allocating rq\n");
+ ENICPMD_LOG(ERR, "error in allocating rq");
return ret;
}
return 0;
@@ -230,14 +230,14 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
/* enic_enable */
ret = enic_alloc_rx_queue_mbufs(pf, &pf->rq[index]);
if (ret) {
- ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs\n");
+ ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs");
return ret;
}
ret = enic_alloc_rx_queue_mbufs(pf, data_rq);
if (ret) {
/* Release the allocated mbufs for the sop rq*/
enic_rxmbuf_queue_release(pf, &pf->rq[index]);
- ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs\n");
+ ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs");
return ret;
}
enic_start_rq(pf, vf->pf_rq_sop_idx);
@@ -430,7 +430,7 @@ static int enic_vf_stats_get(struct rte_eth_dev *eth_dev,
/* Get VF stats via PF */
err = vnic_dev_stats_dump(vf->enic.vdev, &vs);
if (err) {
- ENICPMD_LOG(ERR, "error in getting stats\n");
+ ENICPMD_LOG(ERR, "error in getting stats");
return err;
}
stats->ipackets = vs->rx.rx_frames_ok;
@@ -453,7 +453,7 @@ static int enic_vf_stats_reset(struct rte_eth_dev *eth_dev)
/* Ask PF to clear VF stats */
err = vnic_dev_stats_clear(vf->enic.vdev);
if (err)
- ENICPMD_LOG(ERR, "error in clearing stats\n");
+ ENICPMD_LOG(ERR, "error in clearing stats");
return err;
}
@@ -581,7 +581,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
/* VF MAC */
err = vnic_dev_get_mac_addr(vf->enic.vdev, vf->mac_addr.addr_bytes);
if (err) {
- ENICPMD_LOG(ERR, "error in getting MAC address\n");
+ ENICPMD_LOG(ERR, "error in getting MAC address");
return err;
}
rte_ether_addr_copy(&vf->mac_addr, vf->eth_dev->data->mac_addrs);
@@ -591,7 +591,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
offsetof(struct vnic_enet_config, mtu),
sizeof(c->mtu), &c->mtu);
if (err) {
- ENICPMD_LOG(ERR, "error in getting MTU\n");
+ ENICPMD_LOG(ERR, "error in getting MTU");
return err;
}
/*
diff --git a/drivers/net/failsafe/failsafe_args.c b/drivers/net/failsafe/failsafe_args.c
index 3b867437d7..1b8f1d3050 100644
--- a/drivers/net/failsafe/failsafe_args.c
+++ b/drivers/net/failsafe/failsafe_args.c
@@ -406,7 +406,7 @@ failsafe_args_parse(struct rte_eth_dev *dev, const char *params)
kvlist = rte_kvargs_parse(mut_params,
pmd_failsafe_init_parameters);
if (kvlist == NULL) {
- ERROR("Error parsing parameters, usage:\n"
+ ERROR("Error parsing parameters, usage:"
PMD_FAILSAFE_PARAM_STRING);
return -1;
}
diff --git a/drivers/net/failsafe/failsafe_eal.c b/drivers/net/failsafe/failsafe_eal.c
index d71b512f81..e79d3b4120 100644
--- a/drivers/net/failsafe/failsafe_eal.c
+++ b/drivers/net/failsafe/failsafe_eal.c
@@ -16,7 +16,7 @@ fs_ethdev_portid_get(const char *name, uint16_t *port_id)
size_t len;
if (name == NULL) {
- DEBUG("Null pointer is specified\n");
+ DEBUG("Null pointer is specified");
return -EINVAL;
}
len = strlen(name);
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 031f3eb13f..dc4aba6e30 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -38,7 +38,7 @@ fs_flow_complain(struct rte_flow_error *error)
errstr = "unknown type";
else
errstr = errstrlist[error->type];
- ERROR("Caught error type %d (%s): %s%s\n",
+ ERROR("Caught error type %d (%s): %s%s",
error->type, errstr,
error->cause ? (snprintf(buf, sizeof(buf), "cause: %p, ",
error->cause), buf) : "",
@@ -640,7 +640,7 @@ failsafe_eth_new_event_callback(uint16_t port_id,
if (sdev->state >= DEV_PROBED)
continue;
if (dev->device == NULL) {
- WARN("Trying to probe malformed device %s.\n",
+ WARN("Trying to probe malformed device %s.",
sdev->devargs.name);
continue;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 969ded6ced..68b7310b85 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -173,17 +173,17 @@ fs_rx_event_proxy_service_install(struct fs_priv *priv)
/* run the service */
ret = rte_service_component_runstate_set(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed Setting component runstate\n");
+ ERROR("Failed Setting component runstate");
return ret;
}
ret = rte_service_set_stats_enable(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed enabling stats\n");
+ ERROR("Failed enabling stats");
return ret;
}
ret = rte_service_runstate_set(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed to run service\n");
+ ERROR("Failed to run service");
return ret;
}
priv->rxp.sstate = SS_READY;
diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index 343bd13d67..438c0c5441 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -11,7 +11,7 @@
#define GVE_ADMINQ_SLEEP_LEN 20
#define GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK 100
-#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error:\n Expected: length=%d, feature_mask=%x.\n Actual: length=%d, feature_mask=%x."
+#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error: Expected: length=%d, feature_mask=%x. Actual: length=%d, feature_mask=%x."
#define GVE_DEVICE_OPTION_TOO_BIG_FMT "Length of %s option larger than expected. Possible older version of guest driver."
diff --git a/drivers/net/hinic/base/hinic_pmd_eqs.c b/drivers/net/hinic/base/hinic_pmd_eqs.c
index fecb653401..f0e1139a98 100644
--- a/drivers/net/hinic/base/hinic_pmd_eqs.c
+++ b/drivers/net/hinic/base/hinic_pmd_eqs.c
@@ -471,7 +471,7 @@ int hinic_comm_aeqs_init(struct hinic_hwdev *hwdev)
num_aeqs = HINIC_HWIF_NUM_AEQS(hwdev->hwif);
if (num_aeqs < HINIC_MIN_AEQS) {
- PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d\n",
+ PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d",
HINIC_MIN_AEQS, num_aeqs);
return -EINVAL;
}
diff --git a/drivers/net/hinic/base/hinic_pmd_mbox.c b/drivers/net/hinic/base/hinic_pmd_mbox.c
index 92a7cc1a11..a75a6953ad 100644
--- a/drivers/net/hinic/base/hinic_pmd_mbox.c
+++ b/drivers/net/hinic/base/hinic_pmd_mbox.c
@@ -310,7 +310,7 @@ static int mbox_msg_ack_aeqn(struct hinic_hwdev *hwdev)
/* This is used for ovs */
msg_ack_aeqn = HINIC_AEQN_1;
} else {
- PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d\n", aeq_num);
+ PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d", aeq_num);
msg_ack_aeqn = -1;
}
@@ -372,13 +372,13 @@ static int init_mbox_info(struct hinic_recv_mbox *mbox_info)
mbox_info->mbox = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
if (!mbox_info->mbox) {
- PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed\n");
+ PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed");
return -ENOMEM;
}
mbox_info->buf_out = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
if (!mbox_info->buf_out) {
- PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed\n");
+ PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed");
err = -ENOMEM;
goto alloc_buf_out_err;
}
diff --git a/drivers/net/hinic/base/hinic_pmd_niccfg.c b/drivers/net/hinic/base/hinic_pmd_niccfg.c
index 8c08d63286..a08020313f 100644
--- a/drivers/net/hinic/base/hinic_pmd_niccfg.c
+++ b/drivers/net/hinic/base/hinic_pmd_niccfg.c
@@ -683,7 +683,7 @@ int hinic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
&pause_info, sizeof(pause_info),
&pause_info, &out_size);
if (err || !out_size || pause_info.mgmt_msg_head.status) {
- PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+ PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x",
err, pause_info.mgmt_msg_head.status, out_size);
return -EIO;
}
@@ -1332,7 +1332,7 @@ int hinic_get_mgmt_version(void *hwdev, char *fw)
&fw_ver, sizeof(fw_ver), &fw_ver,
&out_size);
if (err || !out_size || fw_ver.mgmt_msg_head.status) {
- PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x\n",
+ PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x",
err, fw_ver.mgmt_msg_head.status, out_size);
return -EIO;
}
@@ -1767,7 +1767,7 @@ int hinic_set_fdir_filter(void *hwdev, u8 filter_type, u8 qid, u8 type_enable,
&port_filer_cmd, &out_size);
if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
PMD_DRV_LOG(ERR, "Set port Q filter failed, err: %d, status: 0x%x, out size: 0x%x, type: 0x%x,"
- " enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x\n",
+ " enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x",
err, port_filer_cmd.mgmt_msg_head.status, out_size,
filter_type, enable, qid, type_enable);
return -EIO;
@@ -1819,7 +1819,7 @@ int hinic_set_normal_filter(void *hwdev, u8 qid, u8 normal_type_enable,
&port_filer_cmd, &out_size);
if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
PMD_DRV_LOG(ERR, "Set normal filter failed, err: %d, status: 0x%x, out size: 0x%x, fdir_flag: 0x%x,"
- " enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x\n",
+ " enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x",
err, port_filer_cmd.mgmt_msg_head.status, out_size,
flag, enable, qid, normal_type_enable, key);
return -EIO;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index d4978e0649..cb5c013b21 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1914,7 +1914,7 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;
- PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s\n",
+ PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s",
nic_pause.tx_pause ? "on" : "off",
nic_pause.rx_pause ? "on" : "off",
nic_pause.auto_neg ? "on" : "off");
@@ -2559,7 +2559,7 @@ static int hinic_pf_get_default_cos(struct hinic_hwdev *hwdev, u8 *cos_id)
valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.valid_cos_bitmap;
if (!valid_cos_bitmap) {
- PMD_DRV_LOG(ERR, "PF has none cos to support\n");
+ PMD_DRV_LOG(ERR, "PF has none cos to support");
return -EFAULT;
}
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index cb369be5be..a3b58e0a8f 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -242,7 +242,7 @@ hns3_get_rx_queue(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
rx_queues = dev->data->rx_queues;
if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect rx_queues is NULL!\n");
+ hns3_err(hw, "detect rx_queues is NULL!");
return NULL;
}
@@ -267,7 +267,7 @@ hns3_get_tx_queue(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_tx_queues; queue_id++) {
tx_queues = dev->data->tx_queues;
if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect tx_queues is NULL!\n");
+ hns3_err(hw, "detect tx_queues is NULL!");
return NULL;
}
@@ -297,7 +297,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
if (dev->data->nb_rx_queues < dev->data->nb_tx_queues) {
rx_queues = hw->fkq_data.rx_queues;
if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect rx_queues is NULL!\n");
+ hns3_err(hw, "detect rx_queues is NULL!");
return;
}
rxq = (struct hns3_rx_queue *)rx_queues[queue_id];
@@ -311,7 +311,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
queue_id = 0;
if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect tx_queues is NULL!\n");
+ hns3_err(hw, "detect tx_queues is NULL!");
return;
}
txq = (struct hns3_tx_queue *)tx_queues[queue_id];
@@ -961,7 +961,7 @@ hns3_rx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
return -EINVAL;
if (num > rxq->nb_rx_desc) {
- hns3_err(hw, "Invalid BD num=%u\n", num);
+ hns3_err(hw, "Invalid BD num=%u", num);
return -EINVAL;
}
@@ -1003,7 +1003,7 @@ hns3_tx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
return -EINVAL;
if (num > txq->nb_tx_desc) {
- hns3_err(hw, "Invalid BD num=%u\n", num);
+ hns3_err(hw, "Invalid BD num=%u", num);
return -EINVAL;
}
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 916bf30dcb..0b768ef140 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -1806,7 +1806,7 @@ enable_tm_err_intr(struct hns3_adapter *hns, bool en)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret)
- hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d\n",
+ hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d",
en ? "enable" : "disable", ret);
return ret;
@@ -1847,7 +1847,7 @@ enable_common_err_intr(struct hns3_adapter *hns, bool en)
ret = hns3_cmd_send(hw, &desc[0], RTE_DIM(desc));
if (ret)
- hns3_err(hw, "fail to %s common err interrupts, ret = %d\n",
+ hns3_err(hw, "fail to %s common err interrupts, ret = %d",
en ? "enable" : "disable", ret);
return ret;
@@ -1984,7 +1984,7 @@ query_num_bds(struct hns3_hw *hw, bool is_ras, uint32_t *mpf_bd_num,
pf_bd_num_val = rte_le_to_cpu_32(desc.data[1]);
if (mpf_bd_num_val < mpf_min_bd_num || pf_bd_num_val < pf_min_bd_num) {
hns3_err(hw, "error bd num: mpf(%u), min_mpf(%u), "
- "pf(%u), min_pf(%u)\n", mpf_bd_num_val, mpf_min_bd_num,
+ "pf(%u), min_pf(%u)", mpf_bd_num_val, mpf_min_bd_num,
pf_bd_num_val, pf_min_bd_num);
return -EINVAL;
}
@@ -2061,7 +2061,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
opcode = HNS3_OPC_QUERY_CLEAR_PF_RAS_INT;
break;
default:
- hns3_err(hw, "error hardware err_type = %d\n", err_type);
+ hns3_err(hw, "error hardware err_type = %d", err_type);
return -EINVAL;
}
@@ -2069,7 +2069,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
hns3_cmd_setup_basic_desc(&desc[0], opcode, true);
ret = hns3_cmd_send(hw, &desc[0], num);
if (ret) {
- hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d\n",
+ hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d",
opcode, ret);
return ret;
}
@@ -2097,7 +2097,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
hns3_cmd_reuse_desc(&desc[0], false);
ret = hns3_cmd_send(hw, &desc[0], num);
if (ret)
- hns3_err(hw, "clear all hw err int cmd failed, ret = %d\n",
+ hns3_err(hw, "clear all hw err int cmd failed, ret = %d",
ret);
return ret;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 894ac6dd71..c6e77d21cb 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -50,7 +50,7 @@ hns3_ptp_int_en(struct hns3_hw *hw, bool en)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret)
hns3_err(hw,
- "failed to %s ptp interrupt, ret = %d\n",
+ "failed to %s ptp interrupt, ret = %d",
en ? "enable" : "disable", ret);
return ret;
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index be1be6a89c..955bc7e3af 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -355,7 +355,7 @@ hns3_get_dfx_reg_bd_num(struct hns3_hw *hw, uint32_t *bd_num_list,
ret = hns3_cmd_send(hw, desc, HNS3_GET_DFX_REG_BD_NUM_SIZE);
if (ret) {
- hns3_err(hw, "fail to get dfx bd num, ret = %d.\n", ret);
+ hns3_err(hw, "fail to get dfx bd num, ret = %d.", ret);
return ret;
}
@@ -387,7 +387,7 @@ hns3_dfx_reg_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
ret = hns3_cmd_send(hw, desc, bd_num);
if (ret)
hns3_err(hw, "fail to query dfx registers, opcode = 0x%04X, "
- "ret = %d.\n", opcode, ret);
+ "ret = %d.", opcode, ret);
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ffc1f6d874..2b043cd693 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+ PMD_DRV_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
val = I40E_READ_REG(hw, I40E_GL_FWSTS);
if (val & I40E_GL_FWSTS_FWS1B_MASK) {
- PMD_INIT_LOG(ERR, "\nERROR: "
- "Firmware recovery mode detected. Limiting functionality.\n"
- "Refer to the Intel(R) Ethernet Adapters and Devices "
- "User Guide for details on firmware recovery mode.");
+ PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
return -EIO;
}
@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_ab,
NULL);
if (status) {
- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
status);
return ret;
}
@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_ab,
NULL);
if (status) {
- PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d\n",
+ PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d",
status);
return ret;
}
@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
* Warn users and config the default available speeds.
*/
if (is_up && !(force_speed & avail_speed)) {
- PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+ PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
phy_conf.link_speed = avail_speed;
} else {
phy_conf.link_speed = is_up ? force_speed : avail_speed;
@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_GL_MDET_TX_QUEUE_SHIFT) -
hw->func_caps.base_queue;
PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on TX "
- "queue %d PF number 0x%02x VF number 0x%02x device %s\n",
+ "queue %d PF number 0x%02x VF number 0x%02x device %s",
event, queue, pf_num, vf_num, dev->data->name);
I40E_WRITE_REG(hw, I40E_GL_MDET_TX, I40E_MDD_CLEAR32);
mdd_detected = true;
@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
hw->func_caps.base_queue;
PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on RX "
- "queue %d of function 0x%02x device %s\n",
+ "queue %d of function 0x%02x device %s",
event, queue, func, dev->data->name);
I40E_WRITE_REG(hw, I40E_GL_MDET_RX, I40E_MDD_CLEAR32);
mdd_detected = true;
@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
reg = I40E_READ_REG(hw, I40E_PF_MDET_TX);
if (reg & I40E_PF_MDET_TX_VALID_MASK) {
I40E_WRITE_REG(hw, I40E_PF_MDET_TX, I40E_MDD_CLEAR16);
- PMD_DRV_LOG(WARNING, "TX driver issue detected on PF\n");
+ PMD_DRV_LOG(WARNING, "TX driver issue detected on PF");
}
reg = I40E_READ_REG(hw, I40E_PF_MDET_RX);
if (reg & I40E_PF_MDET_RX_VALID_MASK) {
I40E_WRITE_REG(hw, I40E_PF_MDET_RX,
I40E_MDD_CLEAR16);
- PMD_DRV_LOG(WARNING, "RX driver issue detected on PF\n");
+ PMD_DRV_LOG(WARNING, "RX driver issue detected on PF");
}
}
@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_MDD_CLEAR16);
vf->num_mdd_events++;
PMD_DRV_LOG(WARNING, "TX driver issue detected on VF %d %-"
- PRIu64 "times\n",
+ PRIu64 "times",
i, vf->num_mdd_events);
}
@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_MDD_CLEAR16);
vf->num_mdd_events++;
PMD_DRV_LOG(WARNING, "RX driver issue detected on VF %d %-"
- PRIu64 "times\n",
+ PRIu64 "times",
i, vf->num_mdd_events);
}
}
@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (!(hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE)) {
PMD_DRV_LOG(ERR,
"Module EEPROM memory read not supported. "
- "Please update the NVM image.\n");
+ "Please update the NVM image.");
return -EINVAL;
}
@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (hw->phy.link_info.phy_type == I40E_PHY_TYPE_EMPTY) {
PMD_DRV_LOG(ERR,
"Cannot read module EEPROM memory. "
- "No module connected.\n");
+ "No module connected.");
return -EINVAL;
}
@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (sff8472_swap & I40E_MODULE_SFF_ADDR_MODE) {
PMD_DRV_LOG(WARNING,
"Module address swap to access "
- "page 0xA2 is not supported.\n");
+ "page 0xA2 is not supported.");
modinfo->type = RTE_ETH_MODULE_SFF_8079;
modinfo->eeprom_len = RTE_ETH_MODULE_SFF_8079_LEN;
} else if (sff8472_comp == 0x00) {
@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
modinfo->eeprom_len = I40E_MODULE_QSFP_MAX_LEN;
break;
default:
- PMD_DRV_LOG(ERR, "Module type unrecognized\n");
+ PMD_DRV_LOG(ERR, "Module type unrecognized");
return -EINVAL;
}
return 0;
@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
name[strlen(name) - 1] = '\0';
- PMD_DRV_LOG(INFO, "name = %s\n", name);
+ PMD_DRV_LOG(INFO, "name = %s", name);
if (!strcmp(name, "GTPC"))
new_pctype =
i40e_find_customized_pctype(pf,
@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
continue;
memset(name, 0, sizeof(name));
strcpy(name, proto[n].name);
- PMD_DRV_LOG(INFO, "name = %s\n", name);
+ PMD_DRV_LOG(INFO, "name = %s", name);
if (!strncasecmp(name, "PPPOE", 5))
ptype_mapping[i].sw_ptype |=
RTE_PTYPE_L2_ETHER_PPPOE;
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 15d9ff868f..4a47a8f7ee 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1280,17 +1280,17 @@ i40e_pf_host_process_cmd_request_queues(struct i40e_pf_vf *vf, uint8_t *msg)
req_pairs = i40e_align_floor(req_pairs) << 1;
if (req_pairs == 0) {
- PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.\n",
+ PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.",
vf->vf_idx);
} else if (req_pairs > I40E_MAX_QP_NUM_PER_VF) {
PMD_DRV_LOG(ERR,
- "VF %d tried to request more than %d queues.\n",
+ "VF %d tried to request more than %d queues.",
vf->vf_idx,
I40E_MAX_QP_NUM_PER_VF);
vfres->num_queue_pairs = I40E_MAX_QP_NUM_PER_VF;
} else if (req_pairs > cur_pairs + pf->qp_pool.num_free) {
PMD_DRV_LOG(ERR, "VF %d requested %d queues (rounded to %d) "
- "but only %d available\n",
+ "but only %d available",
vf->vf_idx,
vfres->num_queue_pairs,
req_pairs,
@@ -1550,7 +1550,7 @@ check:
if (first_cycle && cur_cycle < first_cycle +
(uint64_t)pf->vf_msg_cfg.period * rte_get_timer_hz()) {
PMD_DRV_LOG(WARNING, "VF %u too much messages(%u in %u"
- " seconds),\n\tany new message from which"
+ " seconds), any new message from which"
" will be ignored during next %u seconds!",
vf_id, pf->vf_msg_cfg.max_msg,
(uint32_t)((cur_cycle - first_cycle +
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5e693cb1ea..e65e8829d9 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1229,11 +1229,11 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
- PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n"
- "tunneling_params: %#x;\n"
- "l2tag2: %#hx;\n"
- "rsvd: %#hx;\n"
- "type_cmd_tso_mss: %#"PRIx64";\n",
+ PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]: "
+ "tunneling_params: %#x; "
+ "l2tag2: %#hx; "
+ "rsvd: %#hx; "
+ "type_cmd_tso_mss: %#"PRIx64";",
tx_pkt, tx_id,
ctx_txd->tunneling_params,
ctx_txd->l2tag2,
@@ -1276,12 +1276,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txd = &txr[tx_id];
txn = &sw_ring[txe->next_id];
}
- PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
- "buf_dma_addr: %#"PRIx64";\n"
- "td_cmd: %#x;\n"
- "td_offset: %#x;\n"
- "td_len: %u;\n"
- "td_tag: %#x;\n",
+ PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]: "
+ "buf_dma_addr: %#"PRIx64"; "
+ "td_cmd: %#x; "
+ "td_offset: %#x; "
+ "td_len: %u; "
+ "td_tag: %#x;",
tx_pkt, tx_id, buf_dma_addr,
td_cmd, td_offset, slen, td_tag);
@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
txq->queue_id);
else
PMD_INIT_LOG(DEBUG,
- "Neither simple nor vector Tx enabled on Tx queue %u\n",
+ "Neither simple nor vector Tx enabled on Tx queue %u",
txq->queue_id);
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 54bff05675..9087909ec2 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
if (!kvlist) {
- PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
return -EINVAL;
}
@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
if (ad->devargs.quanta_size != 0 &&
(ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
ad->devargs.quanta_size & 0x40)) {
- PMD_INIT_LOG(ERR, "invalid quanta size\n");
+ PMD_INIT_LOG(ERR, "invalid quanta size");
ret = -EINVAL;
goto bail;
}
@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
*/
ret = iavf_check_vf_reset_done(hw);
if (ret) {
- PMD_DRV_LOG(ERR, "Wait too long for reset done!\n");
+ PMD_DRV_LOG(ERR, "Wait too long for reset done!");
return ret;
}
iavf_set_no_poll(adapter, false);
- PMD_DRV_LOG(DEBUG, "Start dev_reset ...\n");
+ PMD_DRV_LOG(DEBUG, "Start dev_reset ...");
ret = iavf_dev_uninit(dev);
if (ret)
return ret;
@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
return;
if (!iavf_is_reset_detected(adapter)) {
- PMD_DRV_LOG(DEBUG, "reset not start\n");
+ PMD_DRV_LOG(DEBUG, "reset not start");
return;
}
@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
goto exit;
error:
- PMD_DRV_LOG(DEBUG, "RESET recover with error code=%d\n", ret);
+ PMD_DRV_LOG(DEBUG, "RESET recover with error code=%d", ret);
exit:
vf->in_reset_recovery = false;
iavf_set_no_poll(adapter, false);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f19aa14646..ec0dffa30e 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
up = m->vlan_tci >> IAVF_VLAN_TAG_PCP_OFFSET;
if (!(vf->qos_cap->cap[txq->tc].tc_prio & BIT(up))) {
- PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u\n",
+ PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u",
up, txq->queue_id);
return -1;
} else {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5d845bba31..a025b0ea7f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1646,7 +1646,7 @@ ice_dcf_init_repr_info(struct ice_dcf_adapter *dcf_adapter)
dcf_adapter->real_hw.num_vfs,
sizeof(dcf_adapter->repr_infos[0]), 0);
if (!dcf_adapter->repr_infos) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors\n");
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors");
return -ENOMEM;
}
@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
}
if (dcf_adapter->real_hw.vf_vsi_map[vf_id] == dcf_vsi_id) {
- PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.\n", vf_id);
+ PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.", vf_id);
ret = -EINVAL;
break;
}
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index af281f069a..564ff02fd8 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -133,7 +133,7 @@ ice_dcf_vf_repr_hw(struct ice_dcf_vf_repr *repr)
struct ice_dcf_adapter *dcf_adapter;
if (!repr->dcf_valid) {
- PMD_DRV_LOG(ERR, "DCF for VF representor has been released\n");
+ PMD_DRV_LOG(ERR, "DCF for VF representor has been released");
return NULL;
}
@@ -272,7 +272,7 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
- "Disable the port VLAN firstly\n");
+ "Disable the port VLAN firstly");
return -EINVAL;
}
@@ -318,7 +318,7 @@ ice_dcf_vf_repr_vlan_pvid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
PMD_DRV_LOG(ERR,
- "Disable the VLAN stripping firstly\n");
+ "Disable the VLAN stripping firstly");
return -EINVAL;
}
@@ -367,7 +367,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
- "Can accelerate only outer VLAN in QinQ\n");
+ "Can accelerate only outer VLAN in QinQ");
return -EINVAL;
}
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
tpid != RTE_ETHER_TYPE_VLAN &&
tpid != RTE_ETHER_TYPE_QINQ1) {
PMD_DRV_LOG(ERR,
- "Invalid TPID: 0x%04x\n", tpid);
+ "Invalid TPID: 0x%04x", tpid);
return -EINVAL;
}
@@ -387,7 +387,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
true);
if (err) {
PMD_DRV_LOG(ERR,
- "Failed to reset port VLAN : %d\n",
+ "Failed to reset port VLAN : %d",
err);
return err;
}
@@ -398,7 +398,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
- "Failed to reset VLAN stripping : %d\n",
+ "Failed to reset VLAN stripping : %d",
err);
return err;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c1d2b91ad7..86f43050a5 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1867,7 +1867,7 @@ no_dsn:
strncpy(pkg_file, ICE_PKG_FILE_DEFAULT, ICE_MAX_PKG_FILENAME_SIZE);
if (rte_firmware_read(pkg_file, &buf, &bufsz) < 0) {
- PMD_INIT_LOG(ERR, "failed to search file path\n");
+ PMD_INIT_LOG(ERR, "failed to search file path");
return -1;
}
@@ -1876,7 +1876,7 @@ load_fw:
err = ice_copy_and_init_pkg(hw, buf, bufsz);
if (!ice_is_init_pkg_successful(err)) {
- PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d\n", err);
+ PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d", err);
free(buf);
return -1;
}
@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
kvlist = rte_kvargs_parse(devargs->args, ice_valid_args);
if (kvlist == NULL) {
- PMD_INIT_LOG(ERR, "Invalid kvargs key\n");
+ PMD_INIT_LOG(ERR, "Invalid kvargs key");
return -EINVAL;
}
@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
if (pos) {
if (rte_pci_read_config(pci_dev, &dsn_low, 4, pos + 4) < 0 ||
rte_pci_read_config(pci_dev, &dsn_high, 4, pos + 8) < 0) {
- PMD_INIT_LOG(ERR, "Failed to read pci config space\n");
+ PMD_INIT_LOG(ERR, "Failed to read pci config space");
} else {
use_dsn = true;
dsn = (uint64_t)dsn_high << 32 | dsn_low;
}
} else {
- PMD_INIT_LOG(ERR, "Failed to read device serial number\n");
+ PMD_INIT_LOG(ERR, "Failed to read device serial number");
}
ret = ice_load_pkg(pf->adapter, use_dsn, dsn);
if (ret == 0) {
ret = ice_init_hw_tbls(hw);
if (ret) {
- PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d\n", ret);
+ PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d", ret);
rte_free(hw->pkg_copy);
}
}
@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)
ret = ice_aq_stop_lldp(hw, true, false, NULL);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
+ PMD_INIT_LOG(DEBUG, "lldp has already stopped");
ret = ice_init_dcb(hw, true);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "Failed to init DCB\n");
+ PMD_INIT_LOG(DEBUG, "Failed to init DCB");
/* Forward LLDP packets to default VSI */
ret = ice_vsi_config_sw_lldp(vsi, true);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "Failed to cfg lldp\n");
+ PMD_INIT_LOG(DEBUG, "Failed to cfg lldp");
/* register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ice_interrupt_handler, dev);
@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
if (hw->phy_cfg == ICE_PHY_E822) {
ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
if (ret)
- PMD_INIT_LOG(ERR, "Failed to start phy timer\n");
+ PMD_INIT_LOG(ERR, "Failed to start phy timer");
}
if (!ad->is_safe_mode) {
@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
status = ice_rem_rss_cfg(hw, vsi->idx, cfg);
if (status && status != ICE_ERR_DOES_NOT_EXIST) {
PMD_DRV_LOG(ERR,
- "ice_rem_rss_cfg failed for VSI:%d, error:%d\n",
+ "ice_rem_rss_cfg failed for VSI:%d, error:%d",
vsi->idx, status);
return -EBUSY;
}
@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
status = ice_add_rss_cfg(hw, vsi->idx, cfg);
if (status) {
PMD_DRV_LOG(ERR,
- "ice_add_rss_cfg failed for VSI:%d, error:%d\n",
+ "ice_add_rss_cfg failed for VSI:%d, error:%d",
vsi->idx, status);
return -EBUSY;
}
@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
ret = ice_rem_rss_cfg(hw, vsi_id, cfg);
if (ret && ret != ICE_ERR_DOES_NOT_EXIST)
- PMD_DRV_LOG(ERR, "remove rss cfg failed\n");
+ PMD_DRV_LOG(ERR, "remove rss cfg failed");
ice_rem_rss_cfg_post(pf, cfg->addl_hdrs);
@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
ret = ice_add_rss_cfg_pre(pf, cfg->addl_hdrs);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg pre failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg pre failed");
ret = ice_add_rss_cfg(hw, vsi_id, cfg);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg failed");
ret = ice_add_rss_cfg_post(pf, cfg);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg post failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg post failed");
return 0;
}
@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
if (rss_key_size > sizeof(default_key)) {
PMD_DRV_LOG(WARNING,
"requested size %u is larger than default %zu, "
- "only %zu bytes are gotten for key\n",
+ "only %zu bytes are gotten for key",
rss_key_size, sizeof(default_key),
sizeof(default_key));
}
@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)
if (nb_q == 0) {
PMD_DRV_LOG(WARNING,
- "RSS is not supported as rx queues number is zero\n");
+ "RSS is not supported as rx queues number is zero");
return 0;
}
if (is_safe_mode) {
- PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode\n");
+ PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode");
return 0;
}
@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
cfg.phy_type_low = phy_type_low & phy_caps->phy_type_low;
cfg.phy_type_high = phy_type_high & phy_caps->phy_type_high;
} else {
- PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+ PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
cfg.phy_type_low = phy_caps->phy_type_low;
cfg.phy_type_high = phy_caps->phy_type_high;
}
@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
}
break;
default:
- PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.\n");
+ PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.");
return -EINVAL;
}
return 0;
@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
0, NULL);
PMD_DRV_LOG(DEBUG, "SFF %02X %02X %02X %X = "
"%02X%02X%02X%02X."
- "%02X%02X%02X%02X (%X)\n",
+ "%02X%02X%02X%02X (%X)",
addr, offset, page, is_sfp,
value[0], value[1],
value[2], value[3],
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 0b7920ad44..dd9130ace3 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -334,7 +334,7 @@ ice_fdir_counter_alloc(struct ice_pf *pf, uint32_t shared, uint32_t id)
}
if (!counter_free) {
- PMD_DRV_LOG(ERR, "No free counter found\n");
+ PMD_DRV_LOG(ERR, "No free counter found");
return NULL;
}
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index d8c46347d2..dad117679d 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
ice_get_hw_vsi_num(hw, vsi_handle),
id);
if (ret) {
- PMD_DRV_LOG(ERR, "remove RSS flow failed\n");
+ PMD_DRV_LOG(ERR, "remove RSS flow failed");
return ret;
}
ret = ice_rem_prof(hw, ICE_BLK_RSS, id);
if (ret) {
- PMD_DRV_LOG(ERR, "remove RSS profile failed\n");
+ PMD_DRV_LOG(ERR, "remove RSS profile failed");
return ret;
}
}
@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
/* add new profile */
ret = ice_flow_set_hw_prof(hw, vsi_handle, 0, prof, ICE_BLK_RSS);
if (ret) {
- PMD_DRV_LOG(ERR, "HW profile add failed\n");
+ PMD_DRV_LOG(ERR, "HW profile add failed");
return ret;
}
@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
return 0;
err:
- PMD_DRV_LOG(ERR, "HW profile remove failed\n");
+ PMD_DRV_LOG(ERR, "HW profile remove failed");
return ret;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index dea6a5b535..7da314217a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
- "(port=%d queue=%d) value=0x%"PRIx64"\n",
+ "(port=%d queue=%d) value=0x%"PRIx64,
desc_to_clean_to,
txq->port_id, txq->queue_id,
txd[desc_to_clean_to].cmd_type_offset_bsz);
diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c
index 2c15611a23..baae80d661 100644
--- a/drivers/net/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c
@@ -203,7 +203,7 @@ ipn3ke_vbng_init_done(struct ipn3ke_hw *hw)
}
if (!timeout) {
- IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.\n");
+ IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.");
return -1;
}
@@ -348,7 +348,7 @@ ipn3ke_hw_init(struct rte_afu_device *afu_dev,
hw->acc_tm = 1;
hw->acc_flow = 1;
- IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x\n",
+ IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x",
IPN3KE_READ_REG(hw, 0));
}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index d20a29b9a2..a2f76268b5 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -993,7 +993,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
uint32_t time_out = MHL_COMMAND_TIME_COUNT;
uint32_t i;
- IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start\n");
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start");
pdata = (uint32_t *)flow->rule.key;
IPN3KE_AFU_PMD_DEBUG(" - key :");
@@ -1003,7 +1003,6 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
for (i = 0; i < 4; i++)
IPN3KE_AFU_PMD_DEBUG(" %02x", ipn3ke_swap32(pdata[3 - i]));
- IPN3KE_AFU_PMD_DEBUG("\n");
pdata = (uint32_t *)flow->rule.result;
IPN3KE_AFU_PMD_DEBUG(" - result:");
@@ -1013,7 +1012,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
for (i = 0; i < 1; i++)
IPN3KE_AFU_PMD_DEBUG(" %02x", pdata[i]);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end\n");
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end");
pdata = (uint32_t *)flow->rule.key;
@@ -1254,7 +1253,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_RX_TEST,
0,
0x1);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x", data);
/* configure base mac address */
IPN3KE_MASK_WRITE_REG(hw,
@@ -1268,7 +1267,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_BASE_DST_MAC_ADDR_HI,
0,
0xFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x", data);
IPN3KE_MASK_WRITE_REG(hw,
IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
@@ -1281,7 +1280,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x", data);
/* configure hash lookup rules enable */
@@ -1296,7 +1295,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_LKUP_ENABLE,
0,
0xFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x", data);
/* configure rx parse config, settings associated with VxLAN */
@@ -1311,7 +1310,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_RX_PARSE_CFG,
0,
0x3FFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x", data);
/* configure QinQ S-Tag */
@@ -1326,7 +1325,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_QINQ_STAG,
0,
0xFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x", data);
/* configure gen ctrl */
@@ -1341,7 +1340,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_MHL_GEN_CTRL,
0,
0x1F);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x", data);
/* clear monitoring register */
@@ -1356,7 +1355,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_MHL_MON_0,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x", data);
ipn3ke_flow_hw_flush(hw);
@@ -1366,7 +1365,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_EM_NUM,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x\n", hw->flow_max_entries);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x", hw->flow_max_entries);
hw->flow_num_entries = 0;
return 0;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 8145f1bb2a..feb57420c3 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2401,8 +2401,8 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
else
link->link_status = 0;
- IPN3KE_AFU_PMD_DEBUG("port is %d\n", port);
- IPN3KE_AFU_PMD_DEBUG("link->link_status is %d\n", link->link_status);
+ IPN3KE_AFU_PMD_DEBUG("port is %d", port);
+ IPN3KE_AFU_PMD_DEBUG("link->link_status is %d", link->link_status);
rawdev->dev_ops->attr_get(rawdev,
"LineSideLinkSpeed",
@@ -2479,14 +2479,14 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
if (!rpst->ori_linfo.link_status &&
link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up\n",
+ IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2494,7 +2494,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
}
} else if (rpst->ori_linfo.link_status &&
!link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down",
rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
@@ -2502,7 +2502,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
rte_eth_linkstatus_set(ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2537,14 +2537,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
if (!rpst->ori_linfo.link_status &&
link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(rpst->ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up\n",
+ IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2552,14 +2552,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
}
} else if (rpst->ori_linfo.link_status &&
!link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(rpst->ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 0260227900..44a8b88699 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -1934,10 +1934,10 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
tm_id = tm->tm_id;
- IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***\n", tm_id);
+ IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***", tm_id);
port_n = tm->h.port_node;
- IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)\n", port_n->node_index,
+ IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)", port_n->node_index,
str_state[port_n->node_state]);
vt_nl = &tm->h.port_node->children_node_list;
@@ -1951,7 +1951,6 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
cos_n->node_index,
str_state[cos_n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
}
}
@@ -1969,14 +1968,13 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
tm_id = tm->tm_id;
- IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***\n", tm_id);
+ IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***", tm_id);
n = tm->h.port_commit_node;
IPN3KE_AFU_PMD_DEBUG("Port: ");
if (n)
IPN3KE_AFU_PMD_DEBUG("(%d|%s)",
n->node_index,
str_state[n->node_state]);
- IPN3KE_AFU_PMD_DEBUG("\n");
nl = &tm->h.vt_commit_node_list;
IPN3KE_AFU_PMD_DEBUG("VT : ");
@@ -1985,7 +1983,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
n->node_index,
str_state[n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
nl = &tm->h.cos_commit_node_list;
IPN3KE_AFU_PMD_DEBUG("COS : ");
@@ -1994,7 +1991,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
n->node_index,
str_state[n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
}
/* Traffic manager hierarchy commit */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a44497ce51..3ac65ca3b3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
}
if (hw->mac.ops.fw_recovery_mode && hw->mac.ops.fw_recovery_mode(hw)) {
- PMD_INIT_LOG(ERR, "\nERROR: "
- "Firmware recovery mode detected. Limiting functionality.\n"
- "Refer to the Intel(R) Ethernet Adapters and Devices "
- "User Guide for details on firmware recovery mode.");
+ PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
return -EIO;
}
@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+ PMD_DRV_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index d331308556..3a666ba15f 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -120,7 +120,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no match and no free entries*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx IP table\n");
+ "No free entry left in the Rx IP table");
return -1;
}
@@ -134,7 +134,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx SA table\n");
+ "No free entry left in the Rx SA table");
return -1;
}
@@ -232,7 +232,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Tx SA table\n");
+ "No free entry left in the Tx SA table");
return -1;
}
@@ -291,7 +291,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx IP table\n");
+ "Entry not found in the Rx IP table");
return -1;
}
@@ -306,7 +306,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx SA table\n");
+ "Entry not found in the Rx SA table");
return -1;
}
@@ -349,7 +349,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Tx SA table\n");
+ "Entry not found in the Tx SA table");
return -1;
}
reg_val = IPSRXIDX_WRITE | (sa_index << 3);
@@ -379,7 +379,7 @@ ixgbe_crypto_create_session(void *device,
if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
conf->crypto_xform->aead.algo !=
RTE_CRYPTO_AEAD_AES_GCM) {
- PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
return -ENOTSUP;
}
aead_xform = &conf->crypto_xform->aead;
@@ -388,14 +388,14 @@ ixgbe_crypto_create_session(void *device,
if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
return -ENOTSUP;
}
} else {
if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
return -ENOTSUP;
}
}
@@ -409,7 +409,7 @@ ixgbe_crypto_create_session(void *device,
if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
if (ixgbe_crypto_add_sa(ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ PMD_DRV_LOG(ERR, "Failed to add SA");
return -EPERM;
}
}
@@ -431,12 +431,12 @@ ixgbe_crypto_remove_session(void *device,
struct ixgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);
if (eth_dev != ic_session->dev) {
- PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ PMD_DRV_LOG(ERR, "Session not bound to this device");
return -ENODEV;
}
if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ PMD_DRV_LOG(ERR, "Failed to remove session");
return -EFAULT;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 0a0f639e39..002bc71c2a 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
struct ixgbe_ethertype_filter ethertype_filter;
if (!hw->mac.ops.set_ethertype_anti_spoofing) {
- PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+ PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
return;
}
i = ixgbe_ethertype_filter_lookup(filter_info,
IXGBE_ETHERTYPE_FLOW_CTRL);
if (i >= 0) {
- PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+ PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
return;
}
@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
i = ixgbe_ethertype_filter_insert(filter_info,
					  &ethertype_filter);
if (i < 0) {
- PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+ PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
return;
}
@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~IXGBE_VMOLR_MPE;
@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
case RTE_ETH_MQ_TX_NONE:
case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
- ", but its tx mode = %d\n", vf,
+ ", but its tx mode = %d", vf,
eth_conf->txmode.mq_mode);
return -1;
@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
break;
default:
- PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+ PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
eth_conf->txmode.mq_mode);
return -1;
}
@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (!(fctrl & IXGBE_FCTRL_UPE)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(
(struct rte_ether_addr *)new_mac)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index f76ef63921..15c28e7a3f 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -955,7 +955,7 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
while (--retries) {
status = ixgbe_acquire_swfw_semaphore(hw, mask);
if (status) {
- PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d\n",
+ PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d",
status);
return status;
}
@@ -964,18 +964,18 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
return IXGBE_SUCCESS;
if (status == IXGBE_ERR_TOKEN_RETRY)
- PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d\n",
+ PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d",
status);
ixgbe_release_swfw_semaphore(hw, mask);
if (status != IXGBE_ERR_TOKEN_RETRY) {
PMD_DRV_LOG(ERR,
- "Retry get PHY token failed, Status=%d\n",
+ "Retry get PHY token failed, Status=%d",
status);
return status;
}
}
- PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X\n",
+ PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X",
hw->phy.id);
return status;
}
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 18377d9caf..f05f4c24df 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -1292,7 +1292,7 @@ memif_connect(struct rte_eth_dev *dev)
PROT_READ | PROT_WRITE,
MAP_SHARED, mr->fd, 0);
if (mr->addr == MAP_FAILED) {
- MIF_LOG(ERR, "mmap failed: %s\n",
+ MIF_LOG(ERR, "mmap failed: %s",
strerror(errno));
return -1;
}
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index a1a7e93288..7c0ac6888b 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -106,7 +106,7 @@ mlx4_init_shared_data(void)
sizeof(*mlx4_shared_data),
SOCKET_ID_ANY, 0);
if (mz == NULL) {
- ERROR("Cannot allocate mlx4 shared data\n");
+ ERROR("Cannot allocate mlx4 shared data");
ret = -rte_errno;
goto error;
}
@@ -117,7 +117,7 @@ mlx4_init_shared_data(void)
/* Lookup allocated shared memory. */
mz = rte_memzone_lookup(MZ_MLX4_PMD_SHARED_DATA);
if (mz == NULL) {
- ERROR("Cannot attach mlx4 shared data\n");
+ ERROR("Cannot attach mlx4 shared data");
ret = -rte_errno;
goto error;
}
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9bf1ec5509..297ff3fb31 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -257,7 +257,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
if (tx_free_thresh + 3 >= nb_desc) {
PMD_INIT_LOG(ERR,
"tx_free_thresh must be less than the number of TX entries minus 3(%u)."
- " (tx_free_thresh=%u port=%u queue=%u)\n",
+ " (tx_free_thresh=%u port=%u queue=%u)",
nb_desc - 3,
tx_free_thresh, dev->data->port_id, queue_idx);
return -EINVAL;
@@ -902,7 +902,7 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
if (!rxq->rxbuf_info) {
PMD_DRV_LOG(ERR,
- "Could not allocate rxbuf info for queue %d\n",
+ "Could not allocate rxbuf info for queue %d",
queue_id);
rte_free(rxq->event_buf);
rte_free(rxq);
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 4dced0d328..68b0a8b8ab 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
u32 i;
if (rte_pci_set_bus_master(pci_dev, enable) < 0) {
- DEBUGOUT("Cannot configure PCI bus master\n");
+ DEBUGOUT("Cannot configure PCI bus master");
return -1;
}
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fb86e7b10d..4321924cb9 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
if (ssid == 0x1) {
PMD_INIT_LOG(ERR,
- "Read of internal subsystem device id failed\n");
+ "Read of internal subsystem device id failed");
return -ENODEV;
}
hw->sub_system_id = (u16)ssid >> 8 | (u16)ssid << 8;
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
index 947ae7fe94..bb62e2fbb7 100644
--- a/drivers/net/ngbe/ngbe_pf.c
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -71,7 +71,7 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
sizeof(struct ngbe_vf_info) * vf_num, 0);
if (*vfinfo == NULL) {
PMD_INIT_LOG(ERR,
- "Cannot allocate memory for private VF data\n");
+ "Cannot allocate memory for private VF data");
return -ENOMEM;
}
@@ -320,7 +320,7 @@ ngbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)
vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~NGBE_POOLETHCTL_MCP;
@@ -482,7 +482,7 @@ ngbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -564,7 +564,7 @@ ngbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
if (!(fctrl & NGBE_PSRCTL_UCP)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -601,7 +601,7 @@ ngbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(ea)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/octeon_ep/cnxk_ep_tx.c b/drivers/net/octeon_ep/cnxk_ep_tx.c
index 9f11a2f317..8628edf8a7 100644
--- a/drivers/net/octeon_ep/cnxk_ep_tx.c
+++ b/drivers/net/octeon_ep/cnxk_ep_tx.c
@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;
if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
- otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+ otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
goto exit;
}
diff --git a/drivers/net/octeon_ep/cnxk_ep_vf.c b/drivers/net/octeon_ep/cnxk_ep_vf.c
index ef275703c3..74b63a161f 100644
--- a/drivers/net/octeon_ep/cnxk_ep_vf.c
+++ b/drivers/net/octeon_ep/cnxk_ep_vf.c
@@ -102,7 +102,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
}
if (loop < 0) {
- otx_ep_err("IDLE bit is not set\n");
+ otx_ep_err("IDLE bit is not set");
return -EIO;
}
@@ -134,7 +134,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
} while (reg_val != 0 && loop--);
if (loop < 0) {
- otx_ep_err("INST CNT REGISTER is not zero\n");
+ otx_ep_err("INST CNT REGISTER is not zero");
return -EIO;
}
@@ -181,7 +181,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("OUT CNT REGISTER value is zero\n");
+ otx_ep_err("OUT CNT REGISTER value is zero");
return -EIO;
}
@@ -217,7 +217,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets credit register value is not cleared\n");
+ otx_ep_err("Packets credit register value is not cleared");
return -EIO;
}
@@ -250,7 +250,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets sent register value is not cleared\n");
+ otx_ep_err("Packets sent register value is not cleared");
return -EIO;
}
@@ -280,7 +280,7 @@ cnxk_ep_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("INSTR DBELL not coming back to 0\n");
+ otx_ep_err("INSTR DBELL not coming back to 0");
return -EIO;
}
diff --git a/drivers/net/octeon_ep/otx2_ep_vf.c b/drivers/net/octeon_ep/otx2_ep_vf.c
index 7f4edf8dcf..fdab542246 100644
--- a/drivers/net/octeon_ep/otx2_ep_vf.c
+++ b/drivers/net/octeon_ep/otx2_ep_vf.c
@@ -37,7 +37,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
SDP_VF_R_IN_INSTR_DBELL(q_no));
}
if (loop < 0) {
- otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
return -EIO;
}
@@ -48,7 +48,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
rte_delay_ms(1);
} while ((d64 & ~SDP_VF_R_IN_CNTS_OUT_INT) != 0 && loop--);
if (loop < 0) {
- otx_ep_err("%s: in_cnts init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: in_cnts init retry limit exceeded.", __func__);
return -EIO;
}
@@ -81,7 +81,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
SDP_VF_R_OUT_SLIST_DBELL(q_no));
}
if (loop < 0) {
- otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
return -EIO;
}
@@ -109,7 +109,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
rte_delay_ms(1);
} while ((d64 & ~SDP_VF_R_OUT_CNTS_IN_INT) != 0 && loop--);
if (loop < 0) {
- otx_ep_err("%s: out_cnts init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: out_cnts init retry limit exceeded.", __func__);
return -EIO;
}
@@ -252,7 +252,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
}
if (loop < 0) {
- otx_ep_err("IDLE bit is not set\n");
+ otx_ep_err("IDLE bit is not set");
return -EIO;
}
@@ -283,7 +283,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
} while (reg_val != 0 && loop--);
if (loop < 0) {
- otx_ep_err("INST CNT REGISTER is not zero\n");
+ otx_ep_err("INST CNT REGISTER is not zero");
return -EIO;
}
@@ -332,7 +332,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("OUT CNT REGISTER value is zero\n");
+ otx_ep_err("OUT CNT REGISTER value is zero");
return -EIO;
}
@@ -368,7 +368,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets credit register value is not cleared\n");
+ otx_ep_err("Packets credit register value is not cleared");
return -EIO;
}
otx_ep_dbg("SDP_R[%d]_credit:%x", oq_no, rte_read32(droq->pkts_credit_reg));
@@ -425,7 +425,7 @@ otx2_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("INSTR DBELL not coming back to 0\n");
+ otx_ep_err("INSTR DBELL not coming back to 0");
return -EIO;
}
diff --git a/drivers/net/octeon_ep/otx_ep_common.h b/drivers/net/octeon_ep/otx_ep_common.h
index 82e57520d3..938c51b35d 100644
--- a/drivers/net/octeon_ep/otx_ep_common.h
+++ b/drivers/net/octeon_ep/otx_ep_common.h
@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
{\
typeof(value) val = (value); \
typeof(reg_off) off = (reg_off); \
- otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx\n", \
+ otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx", \
(unsigned long)off, (unsigned long long)val); \
rte_write64(val, ((base_addr) + off)); \
}
diff --git a/drivers/net/octeon_ep/otx_ep_ethdev.c b/drivers/net/octeon_ep/otx_ep_ethdev.c
index 615cbbb648..c0298a56ac 100644
--- a/drivers/net/octeon_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeon_ep/otx_ep_ethdev.c
@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
ret = otx_ep_mbox_get_link_info(eth_dev, &link);
if (ret)
return -EINVAL;
- otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d\n",
+ otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d",
link.link_status, link.link_duplex, link.link_autoneg, link.link_speed);
return rte_eth_linkstatus_set(eth_dev, &link);
}
@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_set_mac_addr(eth_dev, mac_addr);
if (ret)
return -EINVAL;
- otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "",
RTE_ETHER_ADDR_BYTES(mac_addr));
rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
return 0;
@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
/* Enable IQ/OQ for this device */
ret = otx_epvf->fn_list.enable_io_queues(otx_epvf);
if (ret) {
- otx_ep_err("IOQ enable failed\n");
+ otx_ep_err("IOQ enable failed");
return ret;
}
@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
otx_epvf->droq[q]->pkts_credit_reg);
rte_wmb();
- otx_ep_info("OQ[%d] dbells [%d]\n", q,
+ otx_ep_info("OQ[%d] dbells [%d]", q,
rte_read32(otx_epvf->droq[q]->pkts_credit_reg));
}
@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
otx_ep_set_tx_func(eth_dev);
otx_ep_set_rx_func(eth_dev);
- otx_ep_info("dev started\n");
+ otx_ep_info("dev started");
for (q = 0; q < eth_dev->data->nb_rx_queues; q++)
eth_dev->data->rx_queue_state[q] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
/* Same DMA buffer is shared by OQ and IQ, clear it at start */
memset(otx_epvf->ism_buffer_mz->addr, 0, OTX_EP_ISM_BUFFER_SIZE);
if (otx_epvf->ism_buffer_mz == NULL) {
- otx_ep_err("Failed to allocate ISM buffer\n");
+ otx_ep_err("Failed to allocate ISM buffer");
return(-1);
}
otx_ep_dbg("ISM: virt: 0x%p, dma: 0x%" PRIX64,
@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = -EINVAL;
break;
default:
- otx_ep_err("Unsupported device\n");
+ otx_ep_err("Unsupported device");
ret = -EINVAL;
}
if (!ret)
- otx_ep_info("OTX_EP dev_id[%d]\n", dev_id);
+ otx_ep_info("OTX_EP dev_id[%d]", dev_id);
return ret;
}
@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
ret = otx_ep_chip_specific_setup(otx_epvf);
if (ret) {
- otx_ep_err("Chip specific setup failed\n");
+ otx_ep_err("Chip specific setup failed");
goto setup_fail;
}
@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts;
otx_epvf->chip_gen = OTX_EP_CN10XX;
} else {
- otx_ep_err("Invalid chip_id\n");
+ otx_ep_err("Invalid chip_id");
ret = -EINVAL;
goto setup_fail;
}
@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->max_rx_queues = ethdev_queues;
otx_epvf->max_tx_queues = ethdev_queues;
- otx_ep_info("OTX_EP Device is Ready\n");
+ otx_ep_info("OTX_EP Device is Ready");
setup_fail:
return ret;
@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
txmode = &conf->txmode;
if (eth_dev->data->nb_rx_queues > otx_epvf->max_rx_queues ||
eth_dev->data->nb_tx_queues > otx_epvf->max_tx_queues) {
- otx_ep_err("invalid num queues\n");
+ otx_ep_err("invalid num queues");
return -EINVAL;
}
- otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d\n",
+ otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d",
eth_dev->data->nb_rx_queues, eth_dev->data->nb_tx_queues);
otx_epvf->rx_offloads = rxmode->offloads;
@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
uint16_t buf_size;
if (q_no >= otx_epvf->max_rx_queues) {
- otx_ep_err("Invalid rx queue number %u\n", q_no);
+ otx_ep_err("Invalid rx queue number %u", q_no);
return -EINVAL;
}
if (num_rx_descs & (num_rx_descs - 1)) {
- otx_ep_err("Invalid rx desc number should be pow 2 %u\n",
+ otx_ep_err("Invalid rx desc number should be pow 2 %u",
num_rx_descs);
return -EINVAL;
}
if (num_rx_descs < (SDP_GBL_WMARK * 8)) {
- otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark %u\n",
+ otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark %u",
num_rx_descs, (SDP_GBL_WMARK * 8));
return -EINVAL;
}
- otx_ep_dbg("setting up rx queue %u\n", q_no);
+ otx_ep_dbg("setting up rx queue %u", q_no);
mbp_priv = rte_mempool_get_priv(mp);
buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp,
socket_id)) {
- otx_ep_err("droq allocation failed\n");
+ otx_ep_err("droq allocation failed");
return -1;
}
@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
int q_id = rq->q_no;
if (otx_ep_delete_oqs(otx_epvf, q_id))
- otx_ep_err("Failed to delete OQ:%d\n", q_id);
+ otx_ep_err("Failed to delete OQ:%d", q_id);
}
/**
@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
int retval;
if (q_no >= otx_epvf->max_tx_queues) {
- otx_ep_err("Invalid tx queue number %u\n", q_no);
+ otx_ep_err("Invalid tx queue number %u", q_no);
return -EINVAL;
}
if (num_tx_descs & (num_tx_descs - 1)) {
- otx_ep_err("Invalid tx desc number should be pow 2 %u\n",
+ otx_ep_err("Invalid tx desc number should be pow 2 %u",
num_tx_descs);
return -EINVAL;
}
if (num_tx_descs < (SDP_GBL_WMARK * 8)) {
- otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)\n",
+ otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)",
num_tx_descs, (SDP_GBL_WMARK * 8));
return -EINVAL;
}
@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
retval = otx_ep_setup_iqs(otx_epvf, q_no, num_tx_descs, socket_id);
if (retval) {
- otx_ep_err("IQ(TxQ) creation failed.\n");
+ otx_ep_err("IQ(TxQ) creation failed.");
return retval;
}
eth_dev->data->tx_queues[q_no] = otx_epvf->instr_queue[q_no];
- otx_ep_dbg("tx queue[%d] setup\n", q_no);
+ otx_ep_dbg("tx queue[%d] setup", q_no);
return 0;
}
@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
num_queues = otx_epvf->nb_rx_queues;
for (q_no = 0; q_no < num_queues; q_no++) {
if (otx_ep_delete_oqs(otx_epvf, q_no)) {
- otx_ep_err("Failed to delete OQ:%d\n", q_no);
+ otx_ep_err("Failed to delete OQ:%d", q_no);
return -EINVAL;
}
}
- otx_ep_dbg("Num OQs:%d freed\n", otx_epvf->nb_rx_queues);
+ otx_ep_dbg("Num OQs:%d freed", otx_epvf->nb_rx_queues);
num_queues = otx_epvf->nb_tx_queues;
for (q_no = 0; q_no < num_queues; q_no++) {
if (otx_ep_delete_iqs(otx_epvf, q_no)) {
- otx_ep_err("Failed to delete IQ:%d\n", q_no);
+ otx_ep_err("Failed to delete IQ:%d", q_no);
return -EINVAL;
}
}
- otx_ep_dbg("Num IQs:%d freed\n", otx_epvf->nb_tx_queues);
+ otx_ep_dbg("Num IQs:%d freed", otx_epvf->nb_tx_queues);
if (rte_eth_dma_zone_free(eth_dev, "ism", 0)) {
- otx_ep_err("Failed to delete ISM buffer\n");
+ otx_ep_err("Failed to delete ISM buffer");
return -EINVAL;
}
@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_get_mac_addr(eth_dev, mac_addr);
if (ret)
return -EINVAL;
- otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
return 0;
}
@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
ret_val = otx_ep_dev_get_mac_addr(eth_dev, mac_addr);
if (!ret_val) {
if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
- otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
rte_eth_random_addr(mac_addr->addr_bytes);
- otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
ret_val = otx_ep_dev_set_default_mac_addr(eth_dev, mac_addr);
if (ret_val) {
- otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails\n",
+ otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails",
RTE_ETHER_ADDR_BYTES(mac_addr));
return ret_val;
}
}
- otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
} else {
- otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d\n",
+ otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d",
ret_val);
return ret_val;
}
@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->mbox_neg_ver = OTX_EP_MBOX_VERSION_V1;
eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL) {
- otx_ep_err("MAC addresses memory allocation failed\n");
+ otx_ep_err("MAC addresses memory allocation failed");
eth_dev->dev_ops = NULL;
return -ENOMEM;
}
@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->chip_id == PCI_DEVID_CNF10KA_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CNF10KB_EP_NET_VF) {
otx_epvf->pkind = SDP_OTX2_PKIND_FS0;
- otx_ep_info("using pkind %d\n", otx_epvf->pkind);
+ otx_ep_info("using pkind %d", otx_epvf->pkind);
} else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF) {
otx_epvf->pkind = SDP_PKIND;
- otx_ep_info("Using pkind %d.\n", otx_epvf->pkind);
+ otx_ep_info("Using pkind %d.", otx_epvf->pkind);
} else {
- otx_ep_err("Invalid chip id\n");
+ otx_ep_err("Invalid chip id");
return -EINVAL;
}
@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
if (otx_ep_eth_dev_query_set_vf_mac(eth_dev,
(struct rte_ether_addr *)&vf_mac_addr)) {
- otx_ep_err("set mac addr failed\n");
+ otx_ep_err("set mac addr failed");
return -ENODEV;
}
rte_ether_addr_copy(&vf_mac_addr, eth_dev->data->mac_addrs);
diff --git a/drivers/net/octeon_ep/otx_ep_mbox.c b/drivers/net/octeon_ep/otx_ep_mbox.c
index 4118645dc7..c92adeaf9a 100644
--- a/drivers/net/octeon_ep/otx_ep_mbox.c
+++ b/drivers/net/octeon_ep/otx_ep_mbox.c
@@ -44,11 +44,11 @@ __otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,
}
}
if (count == OTX_EP_MBOX_TIMEOUT_MS) {
- otx_ep_err("mbox send Timeout count:%d\n", count);
+ otx_ep_err("mbox send Timeout count:%d", count);
return OTX_EP_MBOX_TIMEOUT_MS;
}
if (rsp->s.type != OTX_EP_MBOX_TYPE_RSP_ACK) {
- otx_ep_err("mbox received NACK from PF\n");
+ otx_ep_err("mbox received NACK from PF");
return OTX_EP_MBOX_CMD_STATUS_NACK;
}
@@ -65,7 +65,7 @@ otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,
rte_spinlock_lock(&otx_ep->mbox_lock);
if (otx_ep_cmd_versions[cmd.s.opcode] > otx_ep->mbox_neg_ver) {
- otx_ep_dbg("CMD:%d not supported in Version:%d\n", cmd.s.opcode,
+ otx_ep_dbg("CMD:%d not supported in Version:%d", cmd.s.opcode,
otx_ep->mbox_neg_ver);
rte_spinlock_unlock(&otx_ep->mbox_lock);
return -EOPNOTSUPP;
@@ -92,7 +92,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
/* Send cmd to read data from PF */
ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("mbox bulk read data request failed\n");
+ otx_ep_err("mbox bulk read data request failed");
rte_spinlock_unlock(&otx_ep->mbox_lock);
return ret;
}
@@ -108,7 +108,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
while (data_len) {
ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("mbox bulk read data request failed\n");
+ otx_ep_err("mbox bulk read data request failed");
otx_ep->mbox_data_index = 0;
memset(otx_ep->mbox_data_buf, 0, OTX_EP_MBOX_MAX_DATA_BUF_SIZE);
rte_spinlock_unlock(&otx_ep->mbox_lock);
@@ -154,10 +154,10 @@ otx_ep_mbox_set_mtu(struct rte_eth_dev *eth_dev, uint16_t mtu)
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("set MTU failed\n");
+ otx_ep_err("set MTU failed");
return -EINVAL;
}
- otx_ep_dbg("mtu set success mtu %u\n", mtu);
+ otx_ep_dbg("mtu set success mtu %u", mtu);
return 0;
}
@@ -178,10 +178,10 @@ otx_ep_mbox_set_mac_addr(struct rte_eth_dev *eth_dev,
cmd.s_set_mac.mac_addr[i] = mac_addr->addr_bytes[i];
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("set MAC address failed\n");
+ otx_ep_err("set MAC address failed");
return -EINVAL;
}
- otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
__func__, RTE_ETHER_ADDR_BYTES(mac_addr));
rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
return 0;
@@ -201,12 +201,12 @@ otx_ep_mbox_get_mac_addr(struct rte_eth_dev *eth_dev,
cmd.s_set_mac.opcode = OTX_EP_MBOX_CMD_GET_MAC_ADDR;
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("get MAC address failed\n");
+ otx_ep_err("get MAC address failed");
return -EINVAL;
}
for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
mac_addr->addr_bytes[i] = rsp.s_set_mac.mac_addr[i];
- otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
__func__, RTE_ETHER_ADDR_BYTES(mac_addr));
return 0;
}
@@ -224,7 +224,7 @@ int otx_ep_mbox_get_link_status(struct rte_eth_dev *eth_dev,
cmd.s_link_status.opcode = OTX_EP_MBOX_CMD_GET_LINK_STATUS;
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("Get link status failed\n");
+ otx_ep_err("Get link status failed");
return -EINVAL;
}
*oper_up = rsp.s_link_status.status;
@@ -242,7 +242,7 @@ int otx_ep_mbox_get_link_info(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_bulk_read(otx_ep, OTX_EP_MBOX_CMD_GET_LINK_INFO,
(uint8_t *)&link_info, (int32_t *)&size);
if (ret) {
- otx_ep_err("Get link info failed\n");
+ otx_ep_err("Get link info failed");
return ret;
}
link->link_status = RTE_ETH_LINK_UP;
@@ -310,12 +310,12 @@ int otx_ep_mbox_version_check(struct rte_eth_dev *eth_dev)
* during initialization of PMD driver.
*/
if (ret == OTX_EP_MBOX_CMD_STATUS_NACK || rsp.s_version.version == 0) {
- otx_ep_dbg("VF Mbox version fallback to base version from:%u\n",
+ otx_ep_dbg("VF Mbox version fallback to base version from:%u",
(uint32_t)cmd.s_version.version);
return 0;
}
otx_ep->mbox_neg_ver = (uint32_t)rsp.s_version.version;
- otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u\n",
+ otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u",
(uint32_t)cmd.s_version.version,
(uint32_t)rsp.s_version.version);
return 0;
diff --git a/drivers/net/octeon_ep/otx_ep_rxtx.c b/drivers/net/octeon_ep/otx_ep_rxtx.c
index c421ef0a1c..65a1f304e8 100644
--- a/drivers/net/octeon_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeon_ep/otx_ep_rxtx.c
@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
int ret = 0;
if (mz == NULL) {
- otx_ep_err("Memzone: NULL\n");
+ otx_ep_err("Memzone: NULL");
return;
}
mz_tmp = rte_memzone_lookup(mz->name);
if (mz_tmp == NULL) {
- otx_ep_err("Memzone %s Not Found\n", mz->name);
+ otx_ep_err("Memzone %s Not Found", mz->name);
return;
}
ret = rte_memzone_free(mz);
if (ret)
- otx_ep_err("Memzone free failed : ret = %d\n", ret);
+ otx_ep_err("Memzone free failed : ret = %d", ret);
}
/* Free IQ resources */
@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
iq = otx_ep->instr_queue[iq_no];
if (iq == NULL) {
- otx_ep_err("Invalid IQ[%d]\n", iq_no);
+ otx_ep_err("Invalid IQ[%d]", iq_no);
return -EINVAL;
}
@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
otx_ep->nb_tx_queues--;
- otx_ep_info("IQ[%d] is deleted\n", iq_no);
+ otx_ep_info("IQ[%d] is deleted", iq_no);
return 0;
}
@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
OTX_EP_PCI_RING_ALIGN,
socket_id);
if (iq->iq_mz == NULL) {
- otx_ep_err("IQ[%d] memzone alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] memzone alloc failed", iq_no);
goto iq_init_fail;
}
@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->base_addr = (uint8_t *)iq->iq_mz->addr;
if (num_descs & (num_descs - 1)) {
- otx_ep_err("IQ[%d] descs not in power of 2\n", iq_no);
+ otx_ep_err("IQ[%d] descs not in power of 2", iq_no);
goto iq_init_fail;
}
@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
RTE_CACHE_LINE_SIZE,
rte_socket_id());
if (iq->req_list == NULL) {
- otx_ep_err("IQ[%d] req_list alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] req_list alloc failed", iq_no);
goto iq_init_fail;
}
@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
sg = rte_zmalloc_socket("sg_entry", (OTX_EP_MAX_SG_LISTS * OTX_EP_SG_ENTRY_SIZE),
OTX_EP_SG_ALIGN, rte_socket_id());
if (sg == NULL) {
- otx_ep_err("IQ[%d] sg_entries alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] sg_entries alloc failed", iq_no);
goto iq_init_fail;
}
@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->req_list[i].finfo.g.sg = sg;
}
- otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d\n",
+ otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d",
iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
iq->nb_desc);
iq->mbuf_list = rte_zmalloc_socket("mbuf_list", (iq->nb_desc * sizeof(struct rte_mbuf *)),
RTE_CACHE_LINE_SIZE, rte_socket_id());
if (!iq->mbuf_list) {
- otx_ep_err("IQ[%d] mbuf_list alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] mbuf_list alloc failed", iq_no);
goto iq_init_fail;
}
@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
otx_ep->instr_queue[iq_no] = iq;
if (otx_ep_init_instr_queue(otx_ep, iq_no, num_descs, socket_id)) {
- otx_ep_err("IQ init is failed\n");
+ otx_ep_err("IQ init is failed");
goto delete_IQ;
}
otx_ep->nb_tx_queues++;
- otx_ep_info("IQ[%d] is created.\n", iq_no);
+ otx_ep_info("IQ[%d] is created.", iq_no);
return 0;
@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
droq = otx_ep->droq[oq_no];
if (droq == NULL) {
- otx_ep_err("Invalid droq[%d]\n", oq_no);
+ otx_ep_err("Invalid droq[%d]", oq_no);
return -EINVAL;
}
@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
otx_ep->nb_rx_queues--;
- otx_ep_info("OQ[%d] is deleted\n", oq_no);
+ otx_ep_info("OQ[%d] is deleted", oq_no);
return 0;
}
@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
for (idx = 0; idx < droq->nb_desc; idx++) {
buf = rte_pktmbuf_alloc(droq->mpool);
if (buf == NULL) {
- otx_ep_err("OQ buffer alloc failed\n");
+ otx_ep_err("OQ buffer alloc failed");
droq->stats.rx_alloc_failure++;
return -ENOMEM;
}
@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
uint32_t desc_ring_size;
int ret;
- otx_ep_info("OQ[%d] Init start\n", q_no);
+ otx_ep_info("OQ[%d] Init start", q_no);
droq = otx_ep->droq[q_no];
droq->otx_ep_dev = otx_ep;
@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
socket_id);
if (droq->desc_ring_mz == NULL) {
- otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no);
+ otx_ep_err("OQ:%d desc_ring allocation failed", q_no);
goto init_droq_fail;
}
droq->desc_ring_dma = droq->desc_ring_mz->iova;
droq->desc_ring = (struct otx_ep_droq_desc *)droq->desc_ring_mz->addr;
- otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
+ otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx",
q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
- otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc);
+ otx_ep_dbg("OQ[%d]: num_desc: %d", q_no, droq->nb_desc);
/* OQ buf_list set up */
droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
(droq->nb_desc * sizeof(struct rte_mbuf *)),
RTE_CACHE_LINE_SIZE, socket_id);
if (droq->recv_buf_list == NULL) {
- otx_ep_err("OQ recv_buf_list alloc failed\n");
+ otx_ep_err("OQ recv_buf_list alloc failed");
goto init_droq_fail;
}
@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ",
sizeof(*droq), RTE_CACHE_LINE_SIZE);
if (droq == NULL) {
- otx_ep_err("Droq[%d] Creation Failed\n", oq_no);
+ otx_ep_err("Droq[%d] Creation Failed", oq_no);
return -ENOMEM;
}
otx_ep->droq[oq_no] = droq;
if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool,
socket_id)) {
- otx_ep_err("Droq[%d] Initialization failed\n", oq_no);
+ otx_ep_err("Droq[%d] Initialization failed", oq_no);
goto delete_OQ;
}
- otx_ep_info("OQ[%d] is created.\n", oq_no);
+ otx_ep_info("OQ[%d] is created.", oq_no);
otx_ep->nb_rx_queues++;
@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
case OTX_EP_REQTYPE_NORESP_GATHER:
/* This will take care of multiple segments also */
rte_pktmbuf_free(mbuf);
- otx_ep_dbg("IQ buffer freed at idx[%d]\n", idx);
+ otx_ep_dbg("IQ buffer freed at idx[%d]", idx);
break;
case OTX_EP_REQTYPE_NONE:
default:
- otx_ep_info("This iqreq mode is not supported:%d\n", reqtype);
+ otx_ep_info("This iqreq mode is not supported:%d", reqtype);
}
/* Reset the request list at this index */
@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;
if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
- otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+ otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
goto exit;
}
@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
iqcmd.irh.u64 = rte_bswap64(iqcmd.irh.u64);
#ifdef OTX_EP_IO_DEBUG
- otx_ep_dbg("After swapping\n");
- otx_ep_dbg("Word0 [dptr]: 0x%016lx\n",
+ otx_ep_dbg("After swapping");
+ otx_ep_dbg("Word0 [dptr]: 0x%016lx",
(unsigned long)iqcmd.dptr);
- otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n", (unsigned long)iqcmd.ih);
- otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n",
+ otx_ep_dbg("Word1 [ihtx]: 0x%016lx", (unsigned long)iqcmd.ih);
+ otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx",
(unsigned long)iqcmd.pki_ih3);
- otx_ep_dbg("Word3 [rptr]: 0x%016lx\n",
+ otx_ep_dbg("Word3 [rptr]: 0x%016lx",
(unsigned long)iqcmd.rptr);
- otx_ep_dbg("Word4 [irh]: 0x%016lx\n", (unsigned long)iqcmd.irh);
- otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n",
+ otx_ep_dbg("Word4 [irh]: 0x%016lx", (unsigned long)iqcmd.irh);
+ otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx",
(unsigned long)iqcmd.exhdr[0]);
rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m));
#endif
@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
if (unlikely(!info->length)) {
int retry = OTX_EP_MAX_DELAYED_PKT_RETRIES;
/* otx_ep_dbg("OCTEON DROQ[%d]: read_idx: %d; Data not ready "
- * "yet, Retry; pending=%lu\n", droq->q_no, droq->read_idx,
+ * "yet, Retry; pending=%lu", droq->q_no, droq->read_idx,
* droq->pkts_pending);
*/
droq->stats.pkts_delayed_data++;
@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
rte_delay_us_block(50);
}
if (!retry && !info->length) {
- otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!\n",
+ otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!",
droq->q_no, droq->read_idx);
/* May be zero length packet; drop it */
assert(0);
@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
last_buf = mbuf;
} else {
- otx_ep_err("no buf\n");
+ otx_ep_err("no buf");
assert(0);
}
diff --git a/drivers/net/octeon_ep/otx_ep_vf.c b/drivers/net/octeon_ep/otx_ep_vf.c
index 236b7a874c..7defb0f13d 100644
--- a/drivers/net/octeon_ep/otx_ep_vf.c
+++ b/drivers/net/octeon_ep/otx_ep_vf.c
@@ -142,7 +142,7 @@ otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr +
OTX_EP_R_IN_CNTS(iq_no);
- otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p\n",
+ otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p",
iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -220,14 +220,14 @@ otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0)
return -EIO;
- otx_ep_dbg("OTX_EP_R[%d]_credit:%x\n", oq_no,
+ otx_ep_dbg("OTX_EP_R[%d]_credit:%x", oq_no,
rte_read32(droq->pkts_credit_reg));
/* Clear the OQ_OUT_CNTS doorbell */
reg_val = rte_read32(droq->pkts_sent_reg);
rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
- otx_ep_dbg("OTX_EP_R[%d]_sent: %x\n", oq_no,
+ otx_ep_dbg("OTX_EP_R[%d]_sent: %x", oq_no,
rte_read32(droq->pkts_sent_reg));
loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -259,7 +259,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("dbell reset failed\n");
+ otx_ep_err("dbell reset failed");
return -EIO;
}
@@ -269,7 +269,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no));
- otx_ep_info("IQ[%d] enable done\n", q_no);
+ otx_ep_info("IQ[%d] enable done", q_no);
return 0;
}
@@ -290,7 +290,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
rte_delay_ms(1);
}
if (loop < 0) {
- otx_ep_err("dbell reset failed\n");
+ otx_ep_err("dbell reset failed");
return -EIO;
}
@@ -299,7 +299,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
reg_val |= 0x1ull;
otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no));
- otx_ep_info("OQ[%d] enable done\n", q_no);
+ otx_ep_info("OQ[%d] enable done", q_no);
return 0;
}
@@ -402,10 +402,10 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
if (otx_ep->conf == NULL) {
otx_ep->conf = otx_ep_get_defconf(otx_ep);
if (otx_ep->conf == NULL) {
- otx_ep_err("OTX_EP VF default config not found\n");
+ otx_ep_err("OTX_EP VF default config not found");
return -ENOENT;
}
- otx_ep_info("Default config is used\n");
+ otx_ep_info("Default config is used");
}
/* Get IOQs (RPVF] count */
@@ -414,7 +414,7 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
otx_ep->sriov_info.rings_per_vf = ((reg_val >> OTX_EP_R_IN_CTL_RPVF_POS)
& OTX_EP_R_IN_CTL_RPVF_MASK);
- otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf);
+ otx_ep_info("OTX_EP RPVF: %d", otx_ep->sriov_info.rings_per_vf);
otx_ep->fn_list.setup_iq_regs = otx_ep_setup_iq_regs;
otx_ep->fn_list.setup_oq_regs = otx_ep_setup_oq_regs;
diff --git a/drivers/net/octeontx/base/octeontx_pkovf.c b/drivers/net/octeontx/base/octeontx_pkovf.c
index 5d445dfb49..7aec84a813 100644
--- a/drivers/net/octeontx/base/octeontx_pkovf.c
+++ b/drivers/net/octeontx/base/octeontx_pkovf.c
@@ -364,7 +364,7 @@ octeontx_pko_chan_stop(struct octeontx_pko_vf_ctl_s *ctl, uint64_t chanid)
res = octeontx_pko_dq_close(dq);
if (res < 0)
- octeontx_log_err("closing DQ%d failed\n", dq);
+ octeontx_log_err("closing DQ%d failed", dq);
dq_cnt++;
dq++;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 2a8378a33e..5f0cd1bb7f 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -1223,7 +1223,7 @@ octeontx_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (dev->data->tx_queues[qid]) {
res = octeontx_dev_tx_queue_stop(dev, qid);
if (res < 0)
- octeontx_log_err("failed stop tx_queue(%d)\n", qid);
+ octeontx_log_err("failed stop tx_queue(%d)", qid);
rte_free(dev->data->tx_queues[qid]);
}
@@ -1342,7 +1342,7 @@ octeontx_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
/* Verify queue index */
if (qidx >= dev->data->nb_rx_queues) {
- octeontx_log_err("QID %d not supported (0 - %d available)\n",
+ octeontx_log_err("QID %d not supported (0 - %d available)",
qidx, (dev->data->nb_rx_queues - 1));
return -ENOTSUP;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index bfec085045..9626c343dc 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -1093,11 +1093,11 @@ set_iface_direction(const char *iface, pcap_t *pcap,
{
const char *direction_str = (direction == PCAP_D_IN) ? "IN" : "OUT";
if (pcap_setdirection(pcap, direction) < 0) {
- PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s\n",
+ PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s",
iface, direction_str, pcap_geterr(pcap));
return -1;
}
- PMD_LOG(INFO, "Setting %s pcap direction %s\n",
+ PMD_LOG(INFO, "Setting %s pcap direction %s",
iface, direction_str);
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 0073dd7405..dc04a52639 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -161,7 +161,7 @@ pfe_recv_pkts_on_intr(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
writel(readl(HIF_INT_ENABLE) | HIF_RXPKT_INT, HIF_INT_ENABLE);
ret = epoll_wait(priv->pfe->hif.epoll_fd, &epoll_ev, 1, ticks);
if (ret < 0 && errno != EINTR)
- PFE_PMD_ERR("epoll_wait fails with %d\n", errno);
+ PFE_PMD_ERR("epoll_wait fails with %d", errno);
}
return work_done;
@@ -338,9 +338,9 @@ pfe_eth_open_cdev(struct pfe_eth_priv_s *priv)
pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDONLY);
if (pfe_cdev_fd < 0) {
- PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+ PFE_PMD_WARN("Unable to open PFE device file (%s).",
PFE_CDEV_PATH);
- PFE_PMD_WARN("Link status update will not be available.\n");
+ PFE_PMD_WARN("Link status update will not be available.");
priv->link_fd = PFE_CDEV_INVALID_FD;
return -1;
}
@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
ret = ioctl(priv->link_fd, ioctl_cmd, &lstatus);
if (ret != 0) {
- PFE_PMD_ERR("Unable to fetch link status (ioctl)\n");
+ PFE_PMD_ERR("Unable to fetch link status (ioctl)");
return -1;
}
- PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.\n",
+ PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.",
lstatus, priv->id);
}
if (old.link_status == lstatus) {
/* no change in status */
- PFE_PMD_DEBUG("No change in link status; Not updating.\n");
+ PFE_PMD_DEBUG("No change in link status; Not updating.");
return -1;
}
@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
pfe_eth_atomic_write_link_status(dev, &link);
- PFE_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ PFE_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
link.link_status ? "up" : "down");
return 0;
@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
addr = of_get_address(np, 0, &cbus_size, NULL);
if (!addr) {
- PFE_PMD_ERR("of_get_address cannot return qman address\n");
+ PFE_PMD_ERR("of_get_address cannot return qman address");
goto err;
}
cbus_addr = of_translate_address(np, addr);
if (!cbus_addr) {
- PFE_PMD_ERR("of_translate_address failed\n");
+ PFE_PMD_ERR("of_translate_address failed");
goto err;
}
addr = of_get_address(np, 1, &ddr_size, NULL);
if (!addr) {
- PFE_PMD_ERR("of_get_address cannot return qman address\n");
+ PFE_PMD_ERR("of_get_address cannot return qman address");
goto err;
}
g_pfe->ddr_phys_baseaddr = of_translate_address(np, addr);
if (!g_pfe->ddr_phys_baseaddr) {
- PFE_PMD_ERR("of_translate_address failed\n");
+ PFE_PMD_ERR("of_translate_address failed");
goto err;
}
diff --git a/drivers/net/pfe/pfe_hif.c b/drivers/net/pfe/pfe_hif.c
index e2b23bbeb7..abb9cde996 100644
--- a/drivers/net/pfe/pfe_hif.c
+++ b/drivers/net/pfe/pfe_hif.c
@@ -309,7 +309,7 @@ client_put_rxpacket(struct hif_rx_queue *queue,
if (readl(&desc->ctrl) & CL_DESC_OWN) {
mbuf = rte_cpu_to_le_64(rte_pktmbuf_alloc(pool));
if (unlikely(!mbuf)) {
- PFE_PMD_WARN("Buffer allocation failure\n");
+ PFE_PMD_WARN("Buffer allocation failure");
return NULL;
}
@@ -770,9 +770,9 @@ pfe_hif_rx_idle(struct pfe_hif *hif)
} while (--hif_stop_loop);
if (readl(HIF_RX_STATUS) & BDP_CSR_RX_DMA_ACTV)
- PFE_PMD_ERR("Failed\n");
+ PFE_PMD_ERR("Failed");
else
- PFE_PMD_INFO("Done\n");
+ PFE_PMD_INFO("Done");
}
#endif
@@ -806,7 +806,7 @@ pfe_hif_init(struct pfe *pfe)
pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDWR);
if (pfe_cdev_fd < 0) {
- PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+ PFE_PMD_WARN("Unable to open PFE device file (%s).",
PFE_CDEV_PATH);
pfe->cdev_fd = PFE_CDEV_INVALID_FD;
return -1;
@@ -817,7 +817,7 @@ pfe_hif_init(struct pfe *pfe)
/* hif interrupt enable */
err = ioctl(pfe->cdev_fd, PFE_CDEV_HIF_INTR_EN, &event_fd);
if (err) {
- PFE_PMD_ERR("\nioctl failed for intr enable err: %d\n",
+ PFE_PMD_ERR("ioctl failed for intr enable err: %d",
errno);
goto err0;
}
@@ -826,7 +826,7 @@ pfe_hif_init(struct pfe *pfe)
epoll_ev.data.fd = event_fd;
err = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, event_fd, &epoll_ev);
if (err < 0) {
- PFE_PMD_ERR("epoll_ctl failed with err = %d\n", errno);
+ PFE_PMD_ERR("epoll_ctl failed with err = %d", errno);
goto err0;
}
pfe->hif.epoll_fd = epoll_fd;
diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c
index 6fe6d33d23..541ba365c6 100644
--- a/drivers/net/pfe/pfe_hif_lib.c
+++ b/drivers/net/pfe/pfe_hif_lib.c
@@ -157,7 +157,7 @@ hif_lib_client_init_rx_buffers(struct hif_client_s *client,
queue->queue_id = 0;
queue->port_id = client->port_id;
queue->priv = client->priv;
- PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d\n", qno,
+ PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d", qno,
queue->base, queue->size);
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c35585f5fd..dcc8cbe943 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -887,7 +887,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
mbuf = txq->sw_tx_ring[idx];
if (mbuf) {
nb_segs = mbuf->nb_segs;
- PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+ PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u", nb_segs);
while (nb_segs) {
/* It's like consuming rxbuf in recv() */
ecore_chain_consume(&txq->tx_pbl);
@@ -897,7 +897,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
rte_pktmbuf_free(mbuf);
txq->sw_tx_ring[idx] = NULL;
txq->sw_tx_cons++;
- PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+ PMD_TX_LOG(DEBUG, txq, "Freed tx packet");
} else {
ecore_chain_consume(&txq->tx_pbl);
txq->nb_tx_avail++;
@@ -919,7 +919,7 @@ qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
- PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+ PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u",
abs(hw_bd_cons - sw_tx_cons));
#endif
while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl))
@@ -1353,7 +1353,7 @@ qede_rx_process_tpa_cmn_cont_end_cqe(__rte_unused struct qede_dev *qdev,
tpa_info->tpa_tail = curr_frag;
qede_rx_bd_ring_consume(rxq);
if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
- PMD_RX_LOG(ERR, rxq, "mbuf allocation fails\n");
+ PMD_RX_LOG(ERR, rxq, "mbuf allocation fails");
rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
rxq->rx_alloc_errors++;
}
@@ -1365,7 +1365,7 @@ qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
struct qede_rx_queue *rxq,
struct eth_fast_path_rx_tpa_cont_cqe *cqe)
{
- PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]\n",
+ PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]",
cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]));
/* only len_list[0] will have value */
qede_rx_process_tpa_cmn_cont_end_cqe(qdev, rxq, cqe->tpa_agg_index,
@@ -1388,7 +1388,7 @@ qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
rx_mb->pkt_len = cqe->total_packet_len;
PMD_RX_LOG(INFO, rxq, "TPA End[%d] reason %d cqe_len %d nb_segs %d"
- " pkt_len %d\n", cqe->tpa_agg_index, cqe->end_reason,
+ " pkt_len %d", cqe->tpa_agg_index, cqe->end_reason,
rte_le_to_cpu_16(cqe->len_list[0]), rx_mb->nb_segs,
rx_mb->pkt_len);
}
@@ -1471,7 +1471,7 @@ qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
pkt_len;
if (unlikely(!cur_size)) {
PMD_RX_LOG(ERR, rxq, "Length is 0 while %u BDs"
- " left for mapping jumbo\n", num_segs);
+ " left for mapping jumbo", num_segs);
qede_recycle_rx_bd_ring(rxq, qdev, num_segs);
return -EINVAL;
}
@@ -1497,7 +1497,7 @@ print_rx_bd_info(struct rte_mbuf *m, struct qede_rx_queue *rxq,
PMD_RX_LOG(INFO, rxq,
"len 0x%04x bf 0x%04x hash_val 0x%x"
" ol_flags 0x%04lx l2=%s l3=%s l4=%s tunn=%s"
- " inner_l2=%s inner_l3=%s inner_l4=%s\n",
+ " inner_l2=%s inner_l3=%s inner_l4=%s",
m->data_len, bitfield, m->hash.rss,
(unsigned long)m->ol_flags,
rte_get_ptype_l2_name(m->packet_type),
@@ -1548,7 +1548,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
PMD_RX_LOG(ERR, rxq,
"New buffers allocation failed,"
- "dropping incoming packets\n");
+ "dropping incoming packets");
dev = &rte_eth_devices[rxq->port_id];
dev->data->rx_mbuf_alloc_failed += count;
rxq->rx_alloc_errors += count;
@@ -1579,13 +1579,13 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
cqe =
(union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
cqe_type = cqe->fast_path_regular.type;
- PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+ PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);
if (likely(cqe_type == ETH_RX_CQE_TYPE_REGULAR)) {
fp_cqe = &cqe->fast_path_regular;
} else {
if (cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH) {
- PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+ PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
ecore_eth_cqe_completion
(&edev->hwfns[rxq->queue_id %
edev->num_hwfns],
@@ -1611,10 +1611,10 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
#endif
if (unlikely(qede_tunn_exist(parse_flag))) {
- PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+ PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1624,7 +1624,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "Outer L3 csum failed, flags = 0x%x\n",
+ "Outer L3 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1659,7 +1659,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1667,7 +1667,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
- PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+ PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -1776,7 +1776,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
PMD_RX_LOG(ERR, rxq,
"New buffers allocation failed,"
- "dropping incoming packets\n");
+ "dropping incoming packets");
dev = &rte_eth_devices[rxq->port_id];
dev->data->rx_mbuf_alloc_failed += count;
rxq->rx_alloc_errors += count;
@@ -1805,7 +1805,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
cqe =
(union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
cqe_type = cqe->fast_path_regular.type;
- PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+ PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);
switch (cqe_type) {
case ETH_RX_CQE_TYPE_REGULAR:
@@ -1823,7 +1823,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
PMD_RX_LOG(INFO, rxq,
"TPA start[%d] - len_on_first_bd %d header %d"
- " [bd_list[0] %d], [seg_len %d]\n",
+ " [bd_list[0] %d], [seg_len %d]",
cqe_start_tpa->tpa_agg_index,
rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
cqe_start_tpa->header_len,
@@ -1843,7 +1843,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rx_mb = rxq->tpa_info[tpa_agg_idx].tpa_head;
goto tpa_end;
case ETH_RX_CQE_TYPE_SLOW_PATH:
- PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+ PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
ecore_eth_cqe_completion(
&edev->hwfns[rxq->queue_id % edev->num_hwfns],
(struct eth_slow_path_rx_cqe *)cqe);
@@ -1881,10 +1881,10 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rss_hash = rte_le_to_cpu_32(cqe_start_tpa->rss_hash);
}
if (qede_tunn_exist(parse_flag)) {
- PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+ PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1894,7 +1894,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "Outer L3 csum failed, flags = 0x%x\n",
+ "Outer L3 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1933,7 +1933,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1941,7 +1941,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
- PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+ PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -2117,13 +2117,13 @@ print_tx_bd_info(struct qede_tx_queue *txq,
rte_cpu_to_le_16(bd1->data.bitfields));
if (bd2)
PMD_TX_LOG(INFO, txq,
- "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x\n",
+ "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x",
rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1,
bd2->data.bitfields2, bd2->data.tunn_ip_size);
if (bd3)
PMD_TX_LOG(INFO, txq,
"BD3: nbytes=0x%04x bf=0x%04x MSS=0x%04x "
- "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x\n",
+ "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x",
rte_cpu_to_le_16(bd3->nbytes),
rte_cpu_to_le_16(bd3->data.bitfields),
rte_cpu_to_le_16(bd3->data.lso_mss),
@@ -2131,7 +2131,7 @@ print_tx_bd_info(struct qede_tx_queue *txq,
bd3->data.tunn_hdr_size_w);
rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
- PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+ PMD_TX_LOG(INFO, txq, "TX offloads = %s", ol_buf);
}
#endif
@@ -2201,7 +2201,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
if (unlikely(i != nb_pkts))
- PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+ PMD_TX_LOG(ERR, txq, "TX prepare failed for %u",
nb_pkts - i);
#endif
return i;
@@ -2215,16 +2215,16 @@ qede_mpls_tunn_tx_sanity_check(struct rte_mbuf *mbuf,
struct qede_tx_queue *txq)
{
if (((mbuf->outer_l2_len + mbuf->outer_l3_len) / 2) > 0xff)
- PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow\n");
+ PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow");
if (((mbuf->outer_l2_len + mbuf->outer_l3_len +
MPLSINUDP_HDR_SIZE) / 2) > 0xff)
- PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow");
if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE) / 2) >
ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK)
- PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE + mbuf->l3_len) / 2) >
ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK)
- PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
}
#endif
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ba2ef4058e..ee563c55ce 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
/* Apply new link configurations if changed */
ret = nicvf_apply_link_speed(dev);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to set link configuration\n");
+ PMD_INIT_LOG(ERR, "Failed to set link configuration");
return ret;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ad29c3cfec..a8bdc10232 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
if (ssid == 0x1) {
PMD_INIT_LOG(ERR,
- "Read of internal subsystem device id failed\n");
+ "Read of internal subsystem device id failed");
return -ENODEV;
}
hw->subsystem_device_id = (u16)ssid >> 8 | (u16)ssid << 8;
@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
PMD_DRV_LOG(INFO, "SFP not present.");
} else if (err == 0) {
hw->mac.setup_sfp(hw);
- PMD_DRV_LOG(INFO, "detected SFP+: %d\n", hw->phy.sfp_type);
+ PMD_DRV_LOG(INFO, "detected SFP+: %d", hw->phy.sfp_type);
txgbe_dev_setup_link_alarm_handler(dev);
txgbe_dev_link_update(dev, 0);
}
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index f9f8108fb8..4af49dd802 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -100,7 +100,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no match and no free entries*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx IP table\n");
+ "No free entry left in the Rx IP table");
return -1;
}
@@ -114,7 +114,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx SA table\n");
+ "No free entry left in the Rx SA table");
return -1;
}
@@ -210,7 +210,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Tx SA table\n");
+ "No free entry left in the Tx SA table");
return -1;
}
@@ -269,7 +269,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx IP table\n");
+ "Entry not found in the Rx IP table");
return -1;
}
@@ -284,7 +284,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx SA table\n");
+ "Entry not found in the Rx SA table");
return -1;
}
@@ -329,7 +329,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Tx SA table\n");
+ "Entry not found in the Tx SA table");
return -1;
}
reg_val = TXGBE_IPSRXIDX_WRITE | (sa_index << 3);
@@ -359,7 +359,7 @@ txgbe_crypto_create_session(void *device,
if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
conf->crypto_xform->aead.algo !=
RTE_CRYPTO_AEAD_AES_GCM) {
- PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
return -ENOTSUP;
}
aead_xform = &conf->crypto_xform->aead;
@@ -368,14 +368,14 @@ txgbe_crypto_create_session(void *device,
if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
return -ENOTSUP;
}
} else {
if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
return -ENOTSUP;
}
}
@@ -389,7 +389,7 @@ txgbe_crypto_create_session(void *device,
if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
if (txgbe_crypto_add_sa(ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ PMD_DRV_LOG(ERR, "Failed to add SA");
return -EPERM;
}
}
@@ -411,12 +411,12 @@ txgbe_crypto_remove_session(void *device,
struct txgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);
if (eth_dev != ic_session->dev) {
- PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ PMD_DRV_LOG(ERR, "Session not bound to this device");
return -ENODEV;
}
if (txgbe_crypto_remove_sa(eth_dev, ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ PMD_DRV_LOG(ERR, "Failed to remove session");
return -EFAULT;
}
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 176f79005c..700632bd88 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -85,7 +85,7 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
sizeof(struct txgbe_vf_info) * vf_num, 0);
if (*vfinfo == NULL) {
PMD_INIT_LOG(ERR,
- "Cannot allocate memory for private VF data\n");
+ "Cannot allocate memory for private VF data");
return -ENOMEM;
}
@@ -167,14 +167,14 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
struct txgbe_ethertype_filter ethertype_filter;
if (!hw->mac.set_ethertype_anti_spoofing) {
- PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+ PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
return;
}
i = txgbe_ethertype_filter_lookup(filter_info,
TXGBE_ETHERTYPE_FLOW_CTRL);
if (i >= 0) {
- PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+ PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
return;
}
@@ -187,7 +187,7 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
i = txgbe_ethertype_filter_insert(filter_info,
&ethertype_filter);
if (i < 0) {
- PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+ PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
return;
}
@@ -408,7 +408,7 @@ txgbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)
vmolr = rd32(hw, TXGBE_POOLETHCTL(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~TXGBE_POOLETHCTL_MCP;
@@ -570,7 +570,7 @@ txgbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -614,7 +614,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
case RTE_ETH_MQ_TX_NONE:
case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
- ", but its tx mode = %d\n", vf,
+ ", but its tx mode = %d", vf,
eth_conf->txmode.mq_mode);
return -1;
@@ -648,7 +648,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
break;
default:
- PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+ PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
eth_conf->txmode.mq_mode);
return -1;
}
@@ -704,7 +704,7 @@ txgbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
if (!(fctrl & TXGBE_PSRCTL_UCP)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -741,7 +741,7 @@ txgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(ea)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 1bfd6aba80..d93d443ec9 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
scvq = virtqueue_alloc(&dev->hw, vq->vq_queue_index, vq->vq_nentries,
VTNET_CQ, SOCKET_ID_ANY, name);
if (!scvq) {
- PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq\n", dev->path);
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq", dev->path);
return -ENOMEM;
}
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 70ae9c6035..f98cdb6d58 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret != 0)
PMD_INIT_LOG(DEBUG,
- "Failed in setup memory region cmd\n");
+ "Failed in setup memory region cmd");
ret = 0;
} else {
- PMD_INIT_LOG(DEBUG, "Failed to setup memory region\n");
+ PMD_INIT_LOG(DEBUG, "Failed to setup memory region");
}
} else {
PMD_INIT_LOG(WARNING, "Memregs can't init (rx: %d, tx: %d)",
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 380f41f98b..e226641fdf 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1341,7 +1341,7 @@ vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev)
/* Zero number of descriptors in the configuration of the RX queue */
if (ret == 0) {
PMD_INIT_LOG(ERR,
- "Invalid configuration in Rx queue: %d, buffers ring: %d\n",
+ "Invalid configuration in Rx queue: %d, buffers ring: %d",
i, j);
return -EINVAL;
}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index aeee4ac289..de8c024abb 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -68,7 +68,7 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_CMDIF_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -99,14 +99,14 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
do {
ret = qbman_swp_enqueue_multiple(swp, &eqdesc, &fd, NULL, 1);
if (ret < 0 && ret != -EBUSY)
- DPAA2_CMDIF_ERR("Transmit failure with err: %d\n", ret);
+ DPAA2_CMDIF_ERR("Transmit failure with err: %d", ret);
retry_count++;
} while ((ret == -EBUSY) && (retry_count < DPAA2_MAX_TX_RETRY_COUNT));
if (ret < 0)
return ret;
- DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet\n");
+ DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet");
return 1;
}
@@ -133,7 +133,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_CMDIF_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -152,7 +152,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
- DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy\n");
+ DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy");
/* Portal was busy, try again */
continue;
}
@@ -169,7 +169,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
/* Check for valid frame. */
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_CMDIF_DP_DEBUG("No frame is delivered\n");
+ DPAA2_CMDIF_DP_DEBUG("No frame is delivered");
return 0;
}
@@ -181,7 +181,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
cmdif_rcv_cnxt->flc = DPAA2_GET_FD_FLC(fd);
cmdif_rcv_cnxt->frc = DPAA2_GET_FD_FRC(fd);
- DPAA2_CMDIF_DP_DEBUG("packet received\n");
+ DPAA2_CMDIF_DP_DEBUG("packet received");
return 1;
}
diff --git a/drivers/raw/ifpga/afu_pmd_n3000.c b/drivers/raw/ifpga/afu_pmd_n3000.c
index 67b3941265..6aae1b224e 100644
--- a/drivers/raw/ifpga/afu_pmd_n3000.c
+++ b/drivers/raw/ifpga/afu_pmd_n3000.c
@@ -1506,7 +1506,7 @@ static int dma_afu_set_irqs(struct afu_rawdev *dev, uint32_t vec_start,
rte_memcpy(&irq_set->data, efds, sizeof(*efds) * count);
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts\n");
+ IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts");
rte_free(irq_set);
return ret;
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index f89bd3f9e2..997fbf8a0d 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -383,7 +383,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
goto fail;
if (value == 0xdeadbeef) {
- IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x\n",
+ IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x",
raw_dev->dev_id, sensor->name, value);
continue;
}
@@ -391,13 +391,13 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
/* monitor temperature sensors */
if (!strcmp(sensor->name, "Board Temperature") ||
!strcmp(sensor->name, "FPGA Die Temperature")) {
- IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d\n",
+ IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d",
sensor->name, value, sensor->high_warn,
sensor->high_fatal);
if (HIGH_WARN(sensor, value) ||
LOW_WARN(sensor, value)) {
- IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d\n",
+ IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d",
sensor->name, value);
*gsd_start = true;
break;
@@ -408,7 +408,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
if (!strcmp(sensor->name, "12V AUX Voltage")) {
if (value < AUX_VOLTAGE_WARN) {
IFPGA_RAWDEV_PMD_INFO(
- "%s reach threshold %d mV\n",
+ "%s reach threshold %d mV",
sensor->name, value);
*gsd_start = true;
break;
@@ -444,7 +444,7 @@ static int set_surprise_link_check_aer(
if (ifpga_monitor_sensor(rdev, &enable))
return -EFAULT;
if (enable || force_disable) {
- IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown\n");
+ IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown");
ifpga_rdev->aer_enable = 1;
/* get bridge fd */
strlcpy(path, "/sys/bus/pci/devices/", sizeof(path));
@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
continue;
if (ifpga_fill_afu_dev(acc, afu_dev)) {
- IFPGA_RAWDEV_PMD_ERR("cannot get info\n");
+ IFPGA_RAWDEV_PMD_ERR("cannot get info");
return -ENOENT;
}
}
@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
ret = opae_manager_flash(mgr, port_id, buffer, size, status);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s pr error %d\n", __func__, ret);
+ IFPGA_RAWDEV_PMD_ERR("%s pr error %d", __func__, ret);
return ret;
}
ret = opae_bridge_reset(br);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d\n",
+ IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d",
__func__, port_id, ret);
return ret;
}
@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
file_fd = open(file_name, O_RDONLY);
if (file_fd < 0) {
- IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s\n",
+ IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s",
__func__, file_name);
- IFPGA_RAWDEV_PMD_ERR("Message : %s\n", strerror(errno));
+ IFPGA_RAWDEV_PMD_ERR("Message : %s", strerror(errno));
return -EINVAL;
}
ret = stat(file_name, &file_stat);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s\n",
+ IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s",
file_name);
ret = -EINVAL;
goto close_fd;
@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
goto close_fd;
}
- IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu\n", buffer_size);
+ IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu", buffer_size);
buffer = rte_malloc(NULL, buffer_size, 0);
if (!buffer) {
ret = -ENOMEM;
@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
/*do PR now*/
ret = fpga_pr(rawdev, port_id, buffer, buffer_size, &pr_error);
- IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.\n", port_id,
+ IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.", port_id,
ret ? "failed" : "success");
if (ret) {
ret = -EINVAL;
@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
afu_pr_conf->afu_id.port,
afu_pr_conf->bs_path);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("do pr error %d\n", ret);
+ IFPGA_RAWDEV_PMD_ERR("do pr error %d", ret);
return ret;
}
}
@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
rte_memcpy(&afu_pr_conf->afu_id.uuid.uuid_high, uuid.b + 8,
sizeof(u64));
- IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx\n",
+ IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx",
__func__,
(unsigned long)afu_pr_conf->afu_id.uuid.uuid_low,
(unsigned long)afu_pr_conf->afu_id.uuid.uuid_high);
@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
if (ret)
return -EINVAL;
- IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64, val);
ret = ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_SEU_EMR_HIGH, &val);
if (ret)
return -EINVAL;
- IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64, val);
return 0;
}
@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_NONFATAL_ERRORS, &val))
return -EINVAL;
if ((val & 0x40) != 0)
- IFPGA_RAWDEV_PMD_INFO("clean not done\n");
+ IFPGA_RAWDEV_PMD_INFO("clean not done");
return 0;
}
@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
return -EINVAL;
- IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64, val);
ifpga_set_fme_error_prop(mgr, FME_ERR_PROP_CLEAR, val);
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
return -EINVAL;
- IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64, val);
return 0;
}
@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
fme_error0.csr = val;
if (fme_error0.fabric_err)
- IFPGA_RAWDEV_PMD_ERR("Fabric error\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric error");
else if (fme_error0.fabfifo_overflow)
- IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error");
else if (fme_error0.afu_acc_mode_err)
- IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected\n");
+ IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected");
else if (fme_error0.pcie0cdc_parity_err)
- IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error\n");
+ IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error");
else if (fme_error0.cvlcdc_parity_err)
- IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error\n");
+ IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error");
else if (fme_error0.fpgaseuerr)
fme_err_read_seu_emr(mgr);
@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
fme_catfatal.csr = val;
if (fme_catfatal.cci_fatal_err)
- IFPGA_RAWDEV_PMD_ERR("CCI error detected\n");
+ IFPGA_RAWDEV_PMD_ERR("CCI error detected");
else if (fme_catfatal.fabric_fatal_err)
- IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected");
else if (fme_catfatal.pcie_poison_err)
- IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports\n");
+ IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports");
else if (fme_catfatal.inject_fata_err)
- IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error\n");
+ IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error");
else if (fme_catfatal.crc_catast_err)
- IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error\n");
+ IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error");
else if (fme_catfatal.injected_catast_err)
- IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error\n");
+ IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error");
else if (fme_catfatal.bmc_seu_catast_err)
fme_err_read_seu_emr(mgr);
@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
nonfaterr.csr = val;
if (nonfaterr.temp_thresh_ap1)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1");
else if (nonfaterr.temp_thresh_ap2)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2");
else if (nonfaterr.pcie_error)
- IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie\n");
+ IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie");
else if (nonfaterr.portfatal_error)
- IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.\n");
+ IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.");
else if (nonfaterr.proc_hot)
- IFPGA_RAWDEV_PMD_INFO("a ProcHot event\n");
+ IFPGA_RAWDEV_PMD_INFO("a ProcHot event");
else if (nonfaterr.afu_acc_mode_err)
- IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch\n");
+ IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch");
else if (nonfaterr.injected_nonfata_err) {
- IFPGA_RAWDEV_PMD_INFO("Injected Warning Error\n");
+ IFPGA_RAWDEV_PMD_INFO("Injected Warning Error");
fme_clear_warning_intr(mgr);
} else if (nonfaterr.temp_thresh_AP6)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6");
else if (nonfaterr.power_thresh_AP1)
- IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1\n");
+ IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1");
else if (nonfaterr.power_thresh_AP2)
- IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2\n");
+ IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2");
else if (nonfaterr.mbp_err)
- IFPGA_RAWDEV_PMD_INFO("an MBP event\n");
+ IFPGA_RAWDEV_PMD_INFO("an MBP event");
return 0;
}
@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
{
struct opae_manager *mgr = (struct opae_manager *)param;
- IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred\n", __func__);
+ IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred", __func__);
fme_err_handle_error0(mgr);
fme_err_handle_nonfaterror(mgr);
@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
return -EINVAL;
if ((*intr_handle) == NULL) {
- IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered\n",
+ IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered",
type == IFPGA_FME_IRQ ? "FME" : "AFU",
type == IFPGA_FME_IRQ ? 0 : vec_start);
return -ENOENT;
@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
rc = rte_intr_callback_unregister(*intr_handle, handler, arg);
if (rc < 0) {
- IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d\n",
+ IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d",
type == IFPGA_FME_IRQ ? "FME" : "AFU",
type == IFPGA_FME_IRQ ? 0 : vec_start);
} else {
@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
rte_intr_efds_index_get(*intr_handle, 0)))
return -rte_errno;
- IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
+ IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d",
name, rte_intr_dev_fd_get(*intr_handle),
rte_intr_fd_get(*intr_handle));
@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
return -EINVAL;
}
- IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ IFPGA_RAWDEV_PMD_INFO("success register %s interrupt", name);
free(intr_efds);
return 0;
diff --git a/drivers/regex/cn9k/cn9k_regexdev.c b/drivers/regex/cn9k/cn9k_regexdev.c
index e96cbf4141..aa809ab5bf 100644
--- a/drivers/regex/cn9k/cn9k_regexdev.c
+++ b/drivers/regex/cn9k/cn9k_regexdev.c
@@ -192,7 +192,7 @@ ree_dev_register(const char *name)
{
struct rte_regexdev *dev;
- cn9k_ree_dbg("Creating regexdev %s\n", name);
+ cn9k_ree_dbg("Creating regexdev %s", name);
/* allocate device structure */
dev = rte_regexdev_register(name);
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index f034bd59ba..2958368813 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -536,7 +536,7 @@ notify_relay(void *arg)
if (nfds < 0) {
if (errno == EINTR)
continue;
- DRV_LOG(ERR, "epoll_wait return fail\n");
+ DRV_LOG(ERR, "epoll_wait return fail");
return 1;
}
@@ -651,12 +651,12 @@ intr_relay(void *arg)
errno == EWOULDBLOCK ||
errno == EAGAIN)
continue;
- DRV_LOG(ERR, "Error reading from file descriptor %d: %s\n",
+ DRV_LOG(ERR, "Error reading from file descriptor %d: %s",
csc_event.data.fd,
strerror(errno));
goto out;
} else if (nbytes == 0) {
- DRV_LOG(ERR, "Read nothing from file descriptor %d\n",
+ DRV_LOG(ERR, "Read nothing from file descriptor %d",
csc_event.data.fd);
continue;
} else {
@@ -1500,7 +1500,7 @@ ifcvf_pci_get_device_type(struct rte_pci_device *pci_dev)
uint16_t device_id;
if (pci_device_id < 0x1000 || pci_device_id > 0x107f) {
- DRV_LOG(ERR, "Probe device is not a virtio device\n");
+ DRV_LOG(ERR, "Probe device is not a virtio device");
return -1;
}
@@ -1577,7 +1577,7 @@ ifcvf_blk_get_config(int vid, uint8_t *config, uint32_t size)
DRV_LOG(DEBUG, " sectors : %u", dev_cfg->geometry.sectors);
DRV_LOG(DEBUG, "num_queues: 0x%08x", dev_cfg->num_queues);
- DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]\n",
+ DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]",
config[0], config[1], config[2], config[3], config[4],
config[5], config[6], config[7]);
return 0;
diff --git a/drivers/vdpa/nfp/nfp_vdpa.c b/drivers/vdpa/nfp/nfp_vdpa.c
index cef80b5476..3e4247dbcb 100644
--- a/drivers/vdpa/nfp/nfp_vdpa.c
+++ b/drivers/vdpa/nfp/nfp_vdpa.c
@@ -127,7 +127,7 @@ nfp_vdpa_vfio_setup(struct nfp_vdpa_dev *device)
if (device->vfio_group_fd < 0)
goto container_destroy;
- DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,\n",
+ DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,",
device->vfio_container_fd, device->vfio_group_fd);
ret = rte_pci_map_device(pci_dev);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.530150931 +0800
+++ 0003-drivers-remove-redundant-newline-from-logs.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From f665790a5dbad7b645ff46f31d65e977324e7bfc Mon Sep 17 00:00:00 2001
+From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
@@ -4,0 +5 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
@@ -6 +7 @@
-Fix places where two newline characters may be logged.
+[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]
@@ -8 +9 @@
-Cc: stable@dpdk.org
+Fix places where two newline characters may be logged.
@@ -13 +14 @@
- drivers/baseband/acc/rte_acc100_pmd.c | 20 +-
+ drivers/baseband/acc/rte_acc100_pmd.c | 22 +-
@@ -15 +16 @@
- .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 16 +-
+ .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
@@ -53 +54 @@
- drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 44 ++--
+ drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 42 ++--
@@ -55 +56 @@
- drivers/crypto/dpaa_sec/dpaa_sec.c | 27 ++-
+ drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
@@ -99,3 +99,0 @@
- drivers/net/cnxk/cn10k_ethdev.c | 2 +-
- drivers/net/cnxk/cn9k_ethdev.c | 2 +-
- drivers/net/cnxk/cnxk_eswitch_devargs.c | 2 +-
@@ -105,3 +102,0 @@
- drivers/net/cnxk/cnxk_rep.c | 8 +-
- drivers/net/cnxk/cnxk_rep.h | 2 +-
- drivers/net/cpfl/cpfl_ethdev.c | 2 +-
@@ -128,3 +123 @@
- drivers/net/gve/base/gve_adminq.c | 4 +-
- drivers/net/gve/gve_rx.c | 2 +-
- drivers/net/gve/gve_tx.c | 2 +-
+ drivers/net/gve/base/gve_adminq.c | 2 +-
@@ -139 +132 @@
- drivers/net/i40e/i40e_ethdev.c | 51 ++--
+ drivers/net/i40e/i40e_ethdev.c | 37 ++-
@@ -141 +134 @@
- drivers/net/i40e/i40e_rxtx.c | 42 ++--
+ drivers/net/i40e/i40e_rxtx.c | 24 +-
@@ -143 +136 @@
- drivers/net/iavf/iavf_rxtx.c | 16 +-
+ drivers/net/iavf/iavf_rxtx.c | 2 +-
@@ -146 +139 @@
- drivers/net/ice/ice_ethdev.c | 50 ++--
+ drivers/net/ice/ice_ethdev.c | 44 ++--
@@ -149 +142 @@
- drivers/net/ice/ice_rxtx.c | 18 +-
+ drivers/net/ice/ice_rxtx.c | 2 +-
@@ -168 +161 @@
- drivers/net/octeon_ep/otx_ep_ethdev.c | 82 +++----
+ drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +++----
@@ -183 +175,0 @@
- drivers/net/virtio/virtio_user/vhost_vdpa.c | 2 +-
@@ -193 +185 @@
- 180 files changed, 1244 insertions(+), 1262 deletions(-)
+ 171 files changed, 1194 insertions(+), 1211 deletions(-)
@@ -196 +188 @@
-index ab69350080..5c91acab7e 100644
+index 292537e24d..9d028f0f48 100644
@@ -199 +191 @@
-@@ -229,7 +229,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
+@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
@@ -208 +200,10 @@
-@@ -2672,7 +2672,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
+ harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);
+
+ if ((harq_layout[harq_index].offset > 0) && harq_prun) {
+- rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
++ rte_bbdev_log_debug("HARQ IN offset unexpected for now");
+ fcw->hcin_size0 = harq_layout[harq_index].size0;
+ fcw->hcin_offset = harq_layout[harq_index].offset;
+ fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
+@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -217 +218 @@
-@@ -2710,7 +2710,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -226 +227 @@
-@@ -2726,7 +2726,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -235,3 +236 @@
-@@ -3450,7 +3450,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
- }
- avail -= 1;
+@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
@@ -239,2 +238,4 @@
-- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d\n",
-+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d",
+ if (i > 0)
+ same_op = cmp_ldpc_dec_op(&ops[i-1]);
+- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
++ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
@@ -244 +245 @@
-@@ -3566,7 +3566,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
@@ -253,3 +254,3 @@
-@@ -3643,7 +3643,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
- atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- rte_memory_order_relaxed);
+@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+ atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ __ATOMIC_RELAXED);
@@ -262 +263 @@
-@@ -3739,7 +3739,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -271,3 +272,3 @@
-@@ -3818,7 +3818,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
- atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- rte_memory_order_relaxed);
+@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+ atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ __ATOMIC_RELAXED);
@@ -280 +281 @@
-@@ -4552,7 +4552,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -290 +291 @@
-index 585dc49bd6..fad984ccc1 100644
+index 686e086a5c..88e1d03ebf 100644
@@ -365 +366 @@
-@@ -3304,7 +3304,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -374 +375 @@
-@@ -3411,7 +3411,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
@@ -383 +384 @@
-@@ -3946,7 +3946,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
+@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
@@ -392 +393 @@
-@@ -4606,7 +4606,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -401 +402 @@
-@@ -4976,7 +4976,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -411 +412 @@
-index 9b253cde28..3e04e44ba2 100644
+index 6b0644ffc5..d60cd3a5c5 100644
@@ -414 +415 @@
-@@ -1997,10 +1997,10 @@ fpga_5gnr_mutex_acquisition(struct fpga_5gnr_queue *q)
+@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
@@ -417,5 +418,9 @@
- usleep(FPGA_5GNR_TIMEOUT_CHECK_INTERVAL);
-- rte_bbdev_log_debug("Acquiring Mutex for %x\n", q->ddr_mutex_uuid);
-+ rte_bbdev_log_debug("Acquiring Mutex for %x", q->ddr_mutex_uuid);
- fpga_5gnr_reg_write_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX, mutex_ctrl);
- mutex_read = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX);
+ usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
+- rte_bbdev_log_debug("Acquiring Mutex for %x\n",
++ rte_bbdev_log_debug("Acquiring Mutex for %x",
+ q->ddr_mutex_uuid);
+ fpga_reg_write_32(q->d->mmio_base,
+ FPGA_5GNR_FEC_MUTEX,
+ mutex_ctrl);
+ mutex_read = fpga_reg_read_32(q->d->mmio_base,
+ FPGA_5GNR_FEC_MUTEX);
@@ -427,7 +432,6 @@
-@@ -2038,7 +2038,7 @@ fpga_5gnr_harq_write_loopback(struct fpga_5gnr_queue *q,
- reg_32 = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- if (reg_32 < harq_in_length) {
- left_length = reg_32;
-- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- }
+@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ if (reg_32 < harq_in_length) {
+ left_length = reg_32;
+- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -436,7 +440,7 @@
-@@ -2108,17 +2108,17 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- reg = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- if (reg < harq_in_length) {
- harq_in_length = reg;
-- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- }
+ input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
+@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ if (reg < harq_in_length) {
+ harq_in_length = reg;
+- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -448 +452,2 @@
- harq_output->buf_len - rte_pktmbuf_headroom(harq_output),
+ harq_output->buf_len -
+ rte_pktmbuf_headroom(harq_output),
@@ -450 +455,2 @@
- harq_in_length = harq_output->buf_len - rte_pktmbuf_headroom(harq_output);
+ harq_in_length = harq_output->buf_len -
+ rte_pktmbuf_headroom(harq_output);
@@ -457,6 +463,6 @@
-@@ -2142,7 +2142,7 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- while (reg != 1) {
- reg = fpga_5gnr_reg_read_8(q->d->mmio_base, FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
- if (reg == FPGA_5GNR_DDR_OVERFLOW) {
-- rte_bbdev_log(ERR, "Read address is overflow!\n");
-+ rte_bbdev_log(ERR, "Read address is overflow!");
+@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
+ if (reg == FPGA_DDR_OVERFLOW) {
+ rte_bbdev_log(ERR,
+- "Read address is overflow!\n");
++ "Read address is overflow!");
@@ -466,9 +471,0 @@
-@@ -3376,7 +3376,7 @@ int rte_fpga_5gnr_fec_configure(const char *dev_name, const struct rte_fpga_5gnr
- return -ENODEV;
- }
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(bbdev->device);
-- rte_bbdev_log(INFO, "Configure dev id %x\n", pci_dev->id.device_id);
-+ rte_bbdev_log(INFO, "Configure dev id %x", pci_dev->id.device_id);
- if (pci_dev->id.device_id == VC_5GNR_PF_DEVICE_ID)
- return vc_5gnr_configure(dev_name, conf);
- else if (pci_dev->id.device_id == AGX100_PF_DEVICE_ID)
@@ -498 +495 @@
-index 574743a9da..1f661dd801 100644
+index 8ddc7ff05f..a66dcd8962 100644
@@ -574 +571 @@
-index c155f4a2fd..097d6dca08 100644
+index 89f0f329c0..adb452fd3e 100644
@@ -577 +574 @@
-@@ -500,7 +500,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -586 +583 @@
-@@ -515,7 +515,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -595 +592 @@
-@@ -629,14 +629,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
+@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
@@ -613 +610 @@
-index e12fd62f34..6981679a2d 100644
+index 5966776a85..b90efeb651 100644
@@ -748 +745 @@
-index daf7684d8e..438ac72563 100644
+index 14aff233d5..35eb8b7628 100644
@@ -751 +748 @@
-@@ -1564,7 +1564,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
+@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
@@ -868 +865 @@
-index ac522f8235..d890fad681 100644
+index 9e5e614b3b..92401e04d0 100644
@@ -871 +868 @@
-@@ -908,7 +908,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
+@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
@@ -894 +891 @@
-index 9f3870a311..e24826bb5d 100644
+index e1cef7a670..c1b91ad92f 100644
@@ -897 +894 @@
-@@ -504,7 +504,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
+@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
@@ -920 +917 @@
-index 293b0c81a1..499f93e373 100644
+index 748d287bad..b02c9c7f38 100644
@@ -923 +920 @@
-@@ -186,7 +186,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
+@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
@@ -933 +930 @@
-index 095afbb9e6..83228fb2b6 100644
+index f8607b2852..d39af3c85e 100644
@@ -936 +933 @@
-@@ -342,7 +342,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
+@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
@@ -946 +943 @@
-index 87a3ac80b9..636f93604e 100644
+index b393be4cf6..2e6846312b 100644
@@ -968 +965 @@
-index e638c616d8..561836760c 100644
+index f6be84ceb5..105450774e 100644
@@ -971 +968,2 @@
-@@ -10,7 +10,7 @@
+@@ -9,7 +9,7 @@
+
@@ -973 +970,0 @@
- #define RTE_LOGTYPE_IDPF_COMMON idpf_common_logtype
@@ -980 +977 @@
-@@ -18,9 +18,6 @@ extern int idpf_common_logtype;
+@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
@@ -1035 +1032 @@
-index ad44b0e01f..4bf9bac23e 100644
+index f95dd33375..21a110d22e 100644
@@ -1366 +1363 @@
-index bb19854b50..7353fd4957 100644
+index 7391360925..d52f937548 100644
@@ -1659 +1656 @@
-index 6ed7a8f41c..27cdbf5ed4 100644
+index b55258689b..1713600db7 100644
@@ -1799 +1796 @@
-index 0dcf971a15..8956f7750d 100644
+index 583ba3b523..acb40bdf77 100644
@@ -1830 +1827 @@
-index 0c800fc350..5088d8ded6 100644
+index b7ca3af5a4..6d42b92d8b 100644
@@ -1843 +1840 @@
-index ca99bc6f42..700e141667 100644
+index a5271d7227..c92fdb446d 100644
@@ -1846 +1843 @@
-@@ -227,7 +227,7 @@ cryptodev_ccp_create(const char *name,
+@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
@@ -1856 +1853 @@
-index dbd36a8a54..32415e815e 100644
+index c2a807fa94..cf163e0208 100644
@@ -1859 +1856 @@
-@@ -1953,7 +1953,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1868 +1865 @@
-@@ -2037,7 +2037,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
+@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
@@ -1877 +1874 @@
-@@ -2114,7 +2114,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1887 +1884 @@
-index c1f7181d55..99b6359e52 100644
+index 6ae356ace0..b65bea3b3f 100644
@@ -1959 +1956 @@
-@@ -1447,7 +1447,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1968 +1965 @@
-@@ -1476,7 +1476,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1977 +1974 @@
-@@ -1494,7 +1494,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1986 +1983 @@
-@@ -1570,7 +1570,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -1995 +1992 @@
-@@ -1603,7 +1603,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -2004 +2001 @@
-@@ -1825,7 +1825,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2013 +2010 @@
-@@ -1842,7 +1842,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2022 +2019 @@
-@@ -1885,7 +1885,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2031 +2028 @@
-@@ -1938,7 +1938,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2040 +2037 @@
-@@ -1949,7 +1949,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2048,2 +2045,2 @@
- dpaa2_sec_dump(ops[num_rx], stdout);
-@@ -1967,7 +1967,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+ dpaa2_sec_dump(ops[num_rx]);
+@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2058,10 +2055 @@
-@@ -2017,7 +2017,7 @@ dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-
- if (qp_conf->nb_descriptors < (2 * FLE_POOL_CACHE_SIZE)) {
- DPAA2_SEC_ERR("Minimum supported nb_descriptors %d,"
-- " but given %d\n", (2 * FLE_POOL_CACHE_SIZE),
-+ " but given %d", (2 * FLE_POOL_CACHE_SIZE),
- qp_conf->nb_descriptors);
- return -EINVAL;
- }
-@@ -2544,7 +2544,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
+@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
@@ -2076 +2064 @@
-@@ -4254,7 +4254,7 @@ check_devargs_handler(const char *key, const char *value,
+@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
@@ -2162 +2150 @@
-index 1ddad6944e..225bf950e9 100644
+index 906ea39047..131cd90c94 100644
@@ -2174 +2162 @@
-@@ -851,7 +851,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
+@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
@@ -2182,2 +2170,2 @@
- dpaa_sec_dump(ctx, qp, stdout);
-@@ -1946,7 +1946,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+ dpaa_sec_dump(ctx, qp);
+@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2192 +2180 @@
-@@ -2056,7 +2056,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2201 +2189 @@
-@@ -2097,7 +2097,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2210 +2198 @@
-@@ -2160,7 +2160,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
@@ -2219 +2207 @@
-@@ -2466,7 +2466,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
+@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
@@ -2228 +2216,2 @@
-@@ -2517,9 +2517,8 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+ for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
@@ -2230,7 +2219,3 @@
- ret = qman_retire_fq(fq, NULL);
- if (ret != 0)
-- DPAA_SEC_ERR("Queue %d is not retired"
-- " err: %d\n", fq->fqid,
-- ret);
-+ DPAA_SEC_ERR("Queue %d is not retired err: %d",
-+ fq->fqid, ret);
+ if (qman_retire_fq(fq, NULL) != 0)
+- DPAA_SEC_DEBUG("Queue is not retired\n");
++ DPAA_SEC_DEBUG("Queue is not retired");
@@ -2240 +2225 @@
-@@ -3475,7 +3474,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
@@ -2249 +2234 @@
-@@ -3574,7 +3573,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
+@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
@@ -2258 +2243 @@
-@@ -3637,7 +3636,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2267 +2252 @@
-@@ -3705,7 +3704,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2277 +2262 @@
-index f8c85b6528..60dbaee4ec 100644
+index fb895a8bc6..82ac1fa1c4 100644
@@ -2280 +2265 @@
-@@ -30,7 +30,7 @@ extern int dpaa_logtype_sec;
+@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;
@@ -2284,2 +2269,2 @@
-- RTE_LOG_DP(level, DPAA_SEC, fmt, ## args)
-+ RTE_LOG_DP_LINE(level, DPAA_SEC, fmt, ## args)
+- RTE_LOG_DP(level, PMD, fmt, ## args)
++ RTE_LOG_DP_LINE(level, PMD, fmt, ## args)
@@ -2343 +2328 @@
-index be6dbe9b1b..d42acd913c 100644
+index 52722f94a0..252bcb3192 100644
@@ -2356 +2341 @@
-index ef4228bd38..f3633091a9 100644
+index 80de25c65b..8e74645e0a 100644
@@ -2359 +2344 @@
-@@ -113,7 +113,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2368 +2353 @@
-@@ -136,7 +136,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2377 +2362 @@
-@@ -171,7 +171,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2386 +2371 @@
-@@ -198,7 +198,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2395 +2380 @@
-@@ -211,7 +211,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2404 +2389 @@
-@@ -223,11 +223,11 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2418 +2403 @@
-@@ -243,7 +243,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2427 +2412 @@
-@@ -258,7 +258,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2436 +2421 @@
-@@ -389,7 +389,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2445 +2430 @@
-@@ -725,7 +725,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2454 +2439 @@
-@@ -761,7 +761,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2463 +2448 @@
-@@ -782,7 +782,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2472 +2457 @@
-@@ -1234,7 +1234,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
@@ -2482 +2467 @@
-index a96779f059..65f0e5c568 100644
+index e64df1a462..a0b354bb83 100644
@@ -2504 +2489 @@
-index 6e2afde34f..3104e6d31e 100644
+index 4647d568de..aa2363ef15 100644
@@ -2916 +2901 @@
-index 491f5ecd5b..e43884e69b 100644
+index 2bf3060278..5d240a3de1 100644
@@ -2919 +2904 @@
-@@ -1531,7 +1531,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev)
+@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
@@ -2926,2 +2911,2 @@
- if (qat_pci_dev->qat_dev_gen == QAT_VQAT &&
- sub_id != ADF_VQAT_ASYM_PCI_SUBSYSTEM_ID) {
+ if (gen_dev_ops->cryptodev_ops == NULL) {
+ QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
@@ -2929 +2914 @@
-index eb267db424..50d687fd37 100644
+index 9f4f6c3d93..224cc0ab50 100644
@@ -2932 +2917 @@
-@@ -581,7 +581,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
+@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
@@ -2941 +2926 @@
-@@ -1180,7 +1180,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
+@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
@@ -2950 +2935 @@
-@@ -1805,7 +1805,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
+@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
@@ -2959 +2944 @@
-@@ -2657,10 +2657,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
+@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
@@ -3187 +3172 @@
-index 7cd6ebc1e0..bce4b4b277 100644
+index 4db3b0554c..8bc076f5d5 100644
@@ -3190 +3175 @@
-@@ -357,7 +357,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
+@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
@@ -3199 +3184 @@
-@@ -630,7 +630,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
+@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
@@ -3208 +3193 @@
-@@ -912,7 +912,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -3231 +3216 @@
-index 81637d9420..60ac219559 100644
+index a78889a7ef..2ee78773bb 100644
@@ -3234 +3219 @@
-@@ -324,7 +324,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3243 +3228 @@
-@@ -339,7 +339,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3252 +3237 @@
-@@ -365,7 +365,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3336 +3321 @@
-index f0a4998bdd..c43ab864ca 100644
+index 5044cb17ef..9dc5edb3fb 100644
@@ -3339 +3324 @@
-@@ -171,7 +171,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
+@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
@@ -3348 +3333 @@
-@@ -259,7 +259,7 @@ set_producer_coremask(const char *key __rte_unused,
+@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
@@ -3357 +3342 @@
-@@ -293,7 +293,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3366 +3351 @@
-@@ -304,7 +304,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3375 +3360 @@
-@@ -322,7 +322,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3384 +3369 @@
-@@ -333,7 +333,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3393 +3378 @@
-@@ -351,7 +351,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3402 +3387 @@
-@@ -361,7 +361,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3411 +3396 @@
-@@ -378,7 +378,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3420 +3405 @@
-@@ -388,7 +388,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3429 +3414 @@
-@@ -405,7 +405,7 @@ set_dev_id(const char *key __rte_unused,
+@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
@@ -3438 +3423 @@
-@@ -425,7 +425,7 @@ set_poll_interval(const char *key __rte_unused,
+@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
@@ -3447 +3432 @@
-@@ -445,7 +445,7 @@ set_port_cos(const char *key __rte_unused,
+@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
@@ -3456 +3441 @@
-@@ -458,18 +458,18 @@ set_port_cos(const char *key __rte_unused,
+@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
@@ -3478 +3463 @@
-@@ -487,7 +487,7 @@ set_cos_bw(const char *key __rte_unused,
+@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
@@ -3487 +3472 @@
-@@ -495,11 +495,11 @@ set_cos_bw(const char *key __rte_unused,
+@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,
@@ -3501 +3486 @@
-@@ -515,7 +515,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3510 +3495 @@
-@@ -524,7 +524,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3519 +3504 @@
-@@ -540,7 +540,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
+@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
@@ -3528 +3513 @@
-@@ -560,7 +560,7 @@ set_default_depth_thresh(const char *key __rte_unused,
+@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
@@ -3537 +3522 @@
-@@ -579,7 +579,7 @@ set_vector_opts_enab(const char *key __rte_unused,
+@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
@@ -3546 +3531 @@
-@@ -599,7 +599,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
+@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
@@ -3555 +3540 @@
-@@ -619,7 +619,7 @@ set_enable_cq_weight(const char *key __rte_unused,
+@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
@@ -3564 +3549 @@
-@@ -640,7 +640,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3573 +3558 @@
-@@ -657,18 +657,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3595 +3580 @@
-@@ -688,7 +688,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3604 +3589 @@
-@@ -705,18 +705,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3626 +3611 @@
-@@ -738,7 +738,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
+@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
@@ -3635 +3620 @@
-@@ -781,7 +781,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3644 +3629 @@
-@@ -809,7 +809,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3653 +3638 @@
-@@ -854,7 +854,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3662 +3647 @@
-@@ -930,27 +930,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3694 +3679 @@
-@@ -1000,7 +1000,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3703 +3688 @@
-@@ -1068,7 +1068,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
+@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
@@ -3712 +3697 @@
-@@ -1088,7 +1088,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
+@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
@@ -3721 +3706 @@
-@@ -1107,7 +1107,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
+@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
@@ -3730 +3715 @@
-@@ -1161,7 +1161,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
+@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
@@ -3739 +3724 @@
-@@ -1236,7 +1236,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
+@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
@@ -3748 +3733 @@
-@@ -1272,7 +1272,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
+@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
@@ -3757 +3742 @@
-@@ -1380,7 +1380,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3766 +3751 @@
-@@ -1412,7 +1412,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3775 +3760 @@
-@@ -1440,20 +1440,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
+@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
@@ -3799 +3784 @@
-@@ -1536,14 +1536,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3816 +3801 @@
-@@ -1579,7 +1579,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3825 +3810 @@
-@@ -1602,7 +1602,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3834 +3819 @@
-@@ -1615,7 +1615,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3843 +3828 @@
-@@ -1717,7 +1717,7 @@ error_exit:
+@@ -1714,7 +1714,7 @@ error_exit:
@@ -3852 +3837 @@
-@@ -1761,13 +1761,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3868 +3853 @@
-@@ -1802,7 +1802,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3877 +3862 @@
-@@ -1827,7 +1827,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3886 +3871 @@
-@@ -1916,7 +1916,7 @@ error_exit:
+@@ -1913,7 +1913,7 @@ error_exit:
@@ -3895 +3880 @@
-@@ -1932,7 +1932,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3904 +3889 @@
-@@ -1950,7 +1950,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3913 +3898 @@
-@@ -1982,7 +1982,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3922 +3907 @@
-@@ -2004,7 +2004,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3931 +3916 @@
-@@ -2015,7 +2015,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3940 +3925 @@
-@@ -2082,9 +2082,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
+@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
@@ -3952 +3937 @@
-@@ -2117,7 +2117,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
+@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
@@ -3961 +3946 @@
-@@ -2154,7 +2154,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
+@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
@@ -3970 +3955 @@
-@@ -2172,7 +2172,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
+@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
@@ -3979 +3964 @@
-@@ -2202,7 +2202,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
+@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
@@ -3988 +3973 @@
-@@ -2240,7 +2240,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -3997 +3982 @@
-@@ -2250,7 +2250,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4006 +3991 @@
-@@ -2258,7 +2258,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4015 +4000 @@
-@@ -2267,7 +2267,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4024 +4009 @@
-@@ -2289,14 +2289,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
+@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
@@ -4041 +4026 @@
-@@ -2381,7 +2381,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
+@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
@@ -4050 +4035 @@
-@@ -2434,7 +2434,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4059 +4044 @@
-@@ -2459,7 +2459,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4068 +4053 @@
-@@ -2477,7 +2477,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4077 +4062 @@
-@@ -2504,7 +2504,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4086 +4071 @@
-@@ -2516,7 +2516,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4095 +4080 @@
-@@ -2609,7 +2609,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4104 +4089 @@
-@@ -2645,7 +2645,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4113 +4098 @@
-@@ -2890,7 +2890,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4115 +4100 @@
- DLB2_LOG_LINE_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED");
+ DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -4122 +4107 @@
-@@ -2909,7 +2909,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4131 +4116 @@
-@@ -3156,7 +3156,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4140 +4125 @@
-@@ -3213,7 +3213,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4149 +4134 @@
-@@ -3367,7 +3367,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
+@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
@@ -4158 +4143 @@
-@@ -4283,7 +4283,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4167 +4152 @@
-@@ -4303,7 +4303,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4176 +4161 @@
-@@ -4394,7 +4394,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4185 +4170 @@
-@@ -4402,7 +4402,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4194 +4179 @@
-@@ -4420,7 +4420,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4203 +4188 @@
-@@ -4435,7 +4435,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4212 +4197 @@
-@@ -4454,7 +4454,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
+@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
@@ -4221 +4206 @@
-@@ -4610,7 +4610,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4230 +4215 @@
-@@ -4618,14 +4618,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4247 +4232 @@
-@@ -4648,7 +4648,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4256 +4241 @@
-@@ -4656,7 +4656,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4265 +4250 @@
-@@ -4664,7 +4664,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4274 +4259 @@
-@@ -4694,14 +4694,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
@@ -4292 +4277 @@
-index 22094f30bb..c037cfe786 100644
+index ff15271dda..28de48e24e 100644
@@ -4561 +4546 @@
-index b3576e5f42..ed4e6e424c 100644
+index 3d15250e11..019e90f7e7 100644
@@ -4653 +4638 @@
-index 1273455673..f0b2c7de99 100644
+index dd4e64395f..4658eaf3a2 100644
@@ -4674 +4659 @@
-@@ -851,7 +851,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
+@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
@@ -4683 +4668 @@
-@@ -885,7 +885,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
+@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
@@ -4692 +4677 @@
-@@ -905,7 +905,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
+@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
@@ -4701 +4686 @@
-@@ -928,7 +928,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
+@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
@@ -4710 +4695 @@
-@@ -1161,7 +1161,7 @@ dpaa2_eventdev_destroy(const char *name)
+@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)
@@ -4733 +4718 @@
-index b34a5fcacd..25853166bf 100644
+index 0cccaf7e97..fe0c0ede6f 100644
@@ -4844 +4829 @@
-@@ -662,7 +662,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4853 +4838 @@
-@@ -709,7 +709,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4862 +4847 @@
-@@ -753,7 +753,7 @@ opdl_remove(struct rte_vdev_device *vdev)
+@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
@@ -5367 +5352 @@
-index 19a52afc7d..7913bc547e 100644
+index 2096496917..babe77a20f 100644
@@ -5415 +5400 @@
-@@ -772,7 +772,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5424 +5409 @@
-@@ -780,7 +780,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5433 +5418 @@
-@@ -788,7 +788,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5442 +5427 @@
-@@ -1000,7 +1000,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5451 +5436 @@
-@@ -1070,7 +1070,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5460 +5445 @@
-@@ -1134,7 +1134,7 @@ sw_remove(struct rte_vdev_device *vdev)
+@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
@@ -5492 +5477 @@
-index 42e17d984c..886fb7fbb0 100644
+index 84371d5d1a..b0c6d153e4 100644
@@ -5495 +5480 @@
-@@ -69,7 +69,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
+@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
@@ -5504 +5489 @@
-@@ -213,7 +213,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
@@ -5513 +5498 @@
-@@ -357,7 +357,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
@@ -5522 +5507 @@
-@@ -472,7 +472,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
+@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
@@ -5954 +5939 @@
-index 17b7b5c543..5448a5f3d7 100644
+index 6ce87f83f4..da45ebf45f 100644
@@ -5957 +5942 @@
-@@ -1353,7 +1353,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
+@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
@@ -6000 +5985 @@
-index cdedf67c6f..209cf5a80c 100644
+index 06c21ebe6d..3cca8a07f3 100644
@@ -6087,39 +6071,0 @@
-diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
-index 55ed54bb0f..ad6bc1ec21 100644
---- a/drivers/net/cnxk/cn10k_ethdev.c
-+++ b/drivers/net/cnxk/cn10k_ethdev.c
-@@ -707,7 +707,7 @@ cn10k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- available_pkts = cn10k_nix_rx_avail_get(rxq);
-
- if ((offset + num - 1) >= available_pkts) {
-- plt_err("Invalid BD num=%u\n", num);
-+ plt_err("Invalid BD num=%u", num);
- return -EINVAL;
- }
-
-diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
-index ea92b1dcb6..84c88655f8 100644
---- a/drivers/net/cnxk/cn9k_ethdev.c
-+++ b/drivers/net/cnxk/cn9k_ethdev.c
-@@ -708,7 +708,7 @@ cn9k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- available_pkts = cn9k_nix_rx_avail_get(rxq);
-
- if ((offset + num - 1) >= available_pkts) {
-- plt_err("Invalid BD num=%u\n", num);
-+ plt_err("Invalid BD num=%u", num);
- return -EINVAL;
- }
-
-diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-index 8167ce673a..655813c71a 100644
---- a/drivers/net/cnxk/cnxk_eswitch_devargs.c
-+++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-@@ -26,7 +26,7 @@ populate_repr_hw_info(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_devar
-
- if (eth_da->type != RTE_ETH_REPRESENTOR_VF && eth_da->type != RTE_ETH_REPRESENTOR_PF &&
- eth_da->type != RTE_ETH_REPRESENTOR_SF) {
-- plt_err("unsupported representor type %d\n", eth_da->type);
-+ plt_err("unsupported representor type %d", eth_da->type);
- return -ENOTSUP;
- }
-
@@ -6127 +6073 @@
-index 38746c81c5..33bac55704 100644
+index c841b31051..60baf806ab 100644
@@ -6130 +6076 @@
-@@ -589,7 +589,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6139 +6085 @@
-@@ -617,7 +617,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6178 +6124 @@
-index b1093dd584..5b0948e07a 100644
+index c8f4848f92..89e00f8fc7 100644
@@ -6181 +6127 @@
-@@ -532,7 +532,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
+@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
@@ -6190,66 +6135,0 @@
-diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
-index ca0637bde5..652d419ad8 100644
---- a/drivers/net/cnxk/cnxk_rep.c
-+++ b/drivers/net/cnxk/cnxk_rep.c
-@@ -270,7 +270,7 @@ cnxk_representee_mtu_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t
-
- rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
- if (rep_dev->rep_id == rep_id) {
-- plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d\n", mtu, hw_func,
-+ plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d", mtu, hw_func,
- rep_id);
- rep_dev->repte_mtu = mtu;
- break;
-@@ -423,7 +423,7 @@ cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
- plt_err("Failed to alloc switch domain: %d", rc);
- goto fail;
- }
-- plt_rep_dbg("Allocated switch domain id %d for pf %d\n", switch_domain_id, pf);
-+ plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf);
- eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id;
- eswitch_dev->sw_dom[j].pf = pf;
- prev_pf = pf;
-@@ -549,7 +549,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- int i, j, rc;
-
- if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) {
-- plt_err("nb_representor_ports %d > %d MAX ETHPORTS\n",
-+ plt_err("nb_representor_ports %d > %d MAX ETHPORTS",
- eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS);
- rc = -EINVAL;
- goto fail;
-@@ -604,7 +604,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- name, cnxk_representee_msg_thread_main,
- eswitch_dev);
- if (rc != 0) {
-- plt_err("Failed to create thread for VF mbox handling\n");
-+ plt_err("Failed to create thread for VF mbox handling");
- goto thread_fail;
- }
- }
-diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
-index ad89649702..aaae2d4e8f 100644
---- a/drivers/net/cnxk/cnxk_rep.h
-+++ b/drivers/net/cnxk/cnxk_rep.h
-@@ -93,7 +93,7 @@ cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev)
- static __rte_always_inline void
- cnxk_rep_pool_buffer_stats(struct rte_mempool *pool)
- {
-- plt_rep_dbg(" pool %s size %d buffer count in use %d available %d\n", pool->name,
-+ plt_rep_dbg(" pool %s size %d buffer count in use %d available %d", pool->name,
- pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool));
- }
-
-diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
-index 222e178949..6f6707a0bd 100644
---- a/drivers/net/cpfl/cpfl_ethdev.c
-+++ b/drivers/net/cpfl/cpfl_ethdev.c
-@@ -2284,7 +2284,7 @@ get_running_host_id(void)
- uint8_t host_id = CPFL_INVALID_HOST_ID;
-
- if (uname(&unamedata) != 0)
-- PMD_INIT_LOG(ERR, "Cannot fetch node_name for host\n");
-+ PMD_INIT_LOG(ERR, "Cannot fetch node_name for host");
- else if (strstr(unamedata.nodename, "ipu-imc"))
- PMD_INIT_LOG(ERR, "CPFL PMD cannot be running on IMC.");
- else if (strstr(unamedata.nodename, "ipu-acc"))
@@ -6310 +6190 @@
-index 449bbda7ca..88374ea905 100644
+index 8e610b6bba..c5b1f161fd 100644
@@ -6331 +6211 @@
-@@ -1934,7 +1934,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
+@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
@@ -6340 +6220 @@
-@@ -2308,7 +2308,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
+@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
@@ -6349 +6229 @@
-@@ -2424,7 +2424,7 @@ rte_pmd_dpaa2_thread_init(void)
+@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
@@ -6358 +6238 @@
-@@ -2839,7 +2839,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6367 +6247 @@
-@@ -2847,7 +2847,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6376 +6256 @@
-@@ -2930,7 +2930,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
+@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
@@ -6386 +6266 @@
-index 6c7bac4d48..62e350d736 100644
+index eec7e60650..e590f6f748 100644
@@ -6398 +6278 @@
-@@ -3602,7 +3602,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6407 +6287 @@
-@@ -3720,14 +3720,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6424 +6304 @@
-@@ -3749,7 +3749,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6433 +6313 @@
-@@ -3774,7 +3774,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6442 +6322 @@
-@@ -3843,20 +3843,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
+@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
@@ -6467 +6347 @@
-@@ -3935,7 +3935,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6476 +6356 @@
-@@ -3947,7 +3947,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6485 +6365 @@
-@@ -3957,7 +3957,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6494 +6374 @@
-@@ -3967,7 +3967,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6503 +6383 @@
-@@ -4014,13 +4014,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6519 +6399 @@
-@@ -4031,13 +4031,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6656 +6536 @@
-index 36a14526a5..59f7a172c6 100644
+index 63463c4fbf..eb649fb063 100644
@@ -6696 +6576 @@
-index cb854964b4..97d65e7181 100644
+index 8fe5bfa013..3c0f282ec3 100644
@@ -6774 +6654 @@
-index 095be27b08..1e0a483d4a 100644
+index 8858f975f8..d64a1aedd3 100644
@@ -6777 +6657 @@
-@@ -5116,7 +5116,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
+@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
@@ -6787 +6667 @@
-index d02ee206f1..ffbecc407c 100644
+index c9352f0746..d8c30ef150 100644
@@ -6790 +6670 @@
-@@ -151,7 +151,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
@@ -6799 +6679 @@
-@@ -198,7 +198,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
+@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
@@ -6880 +6760 @@
-index cad8db2f6f..c1dba0c0fd 100644
+index b04b6c9aa1..1121874346 100644
@@ -6883 +6763 @@
-@@ -672,7 +672,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
+@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
@@ -6892 +6772 @@
-@@ -695,7 +695,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6901 +6781 @@
-@@ -703,7 +703,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6910 +6790 @@
-@@ -716,7 +716,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6919 +6799 @@
-@@ -982,7 +982,7 @@ static int udp_tunnel_common_check(struct enic *enic,
+@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
@@ -6928 +6808 @@
-@@ -995,10 +995,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
+@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
@@ -6941 +6821 @@
-@@ -1029,7 +1029,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
+@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
@@ -6950 +6830 @@
-@@ -1061,7 +1061,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
+@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
@@ -6959 +6839 @@
-@@ -1325,7 +1325,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7188 +7068 @@
-index 09c6bff026..bcb983e4a0 100644
+index 343bd13d67..438c0c5441 100644
@@ -7200,35 +7079,0 @@
-@@ -736,7 +736,7 @@ gve_set_max_desc_cnt(struct gve_priv *priv,
- {
- if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
- PMD_DRV_LOG(DEBUG, "Overriding max ring size from device for DQ "
-- "queue format to 4096.\n");
-+ "queue format to 4096.");
- priv->max_rx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- priv->max_tx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- return;
-diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
-index 89b6ef384a..1f5fa3f1da 100644
---- a/drivers/net/gve/gve_rx.c
-+++ b/drivers/net/gve/gve_rx.c
-@@ -306,7 +306,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
-
- /* Ring size is required to be a power of two. */
- if (!rte_is_power_of_2(nb_desc)) {
-- PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+ PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- nb_desc);
- return -EINVAL;
- }
-diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
-index 658bfb972b..015ea9646b 100644
---- a/drivers/net/gve/gve_tx.c
-+++ b/drivers/net/gve/gve_tx.c
-@@ -561,7 +561,7 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
-
- /* Ring size is required to be a power of two. */
- if (!rte_is_power_of_2(nb_desc)) {
-- PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+ PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- nb_desc);
- return -EINVAL;
- }
@@ -7398 +7243 @@
-index 26fa2eb951..f7162ee7bc 100644
+index 916bf30dcb..0b768ef140 100644
@@ -7491 +7336 @@
-index f847bf82bc..42f51c7621 100644
+index ffc1f6d874..2b043cd693 100644
@@ -7494 +7339 @@
-@@ -668,7 +668,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7503 +7348 @@
-@@ -1583,10 +1583,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
+@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
@@ -7515 +7360 @@
-@@ -2326,7 +2323,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7524 +7369 @@
-@@ -2336,7 +2333,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7533 +7378 @@
-@@ -2361,7 +2358,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7542 +7387 @@
-@@ -6959,7 +6956,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7551 +7396 @@
-@@ -6975,7 +6972,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7560 +7405 @@
-@@ -6985,13 +6982,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7576 +7421 @@
-@@ -7004,7 +7001,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7585 +7430 @@
-@@ -7014,7 +7011,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7594 +7439 @@
-@@ -11449,7 +11446,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7603 +7448 @@
-@@ -11460,7 +11457,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7612 +7457 @@
-@@ -11490,7 +11487,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7621 +7466 @@
-@@ -11526,7 +11523,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7630 +7475 @@
-@@ -11828,7 +11825,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7639 +7484 @@
-@@ -11972,7 +11969,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7648,63 +7492,0 @@
-@@ -12317,7 +12314,7 @@ i40e_fec_get_capability(struct rte_eth_dev *dev,
- if (hw->mac.type == I40E_MAC_X722 &&
- !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
-- " firmware. Please update the NVM image.\n");
-+ " firmware. Please update the NVM image.");
- return -ENOTSUP;
- }
-
-@@ -12359,7 +12356,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- /* Get link info */
- ret = i40e_aq_get_link_info(hw, enable_lse, &link_status, NULL);
- if (ret != I40E_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -12369,7 +12366,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- ret = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- NULL);
- if (ret) {
-- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -12435,7 +12432,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- if (hw->mac.type == I40E_MAC_X722 &&
- !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
-- " firmware. Please update the NVM image.\n");
-+ " firmware. Please update the NVM image.");
- return -ENOTSUP;
- }
-
-@@ -12507,7 +12504,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- NULL);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
-@@ -12524,7 +12521,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- config.fec_config = req_fec & I40E_AQ_PHY_FEC_CONFIG_MASK;
- status = i40e_aq_set_phy_config(hw, &config, NULL);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
-@@ -12532,7 +12529,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
-
- status = i40e_update_link_info(hw);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
@@ -7746 +7528 @@
-index ff977a3681..839c8a5442 100644
+index 5e693cb1ea..e65e8829d9 100644
@@ -7785,73 +7567 @@
-@@ -1564,7 +1564,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- if ((adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -1573,7 +1573,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- (mb->data_len > mb->pkt_len ||
- mb->data_len < I40E_TX_MIN_PKT_LEN ||
- mb->data_len > adapter->max_pkt_len)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- mb->data_len, I40E_TX_MIN_PKT_LEN, adapter->max_pkt_len);
- pkt_error = true;
- break;
-@@ -1586,13 +1586,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- * the limites.
- */
- if (mb->nb_segs > I40E_TX_MAX_MTU_SEG) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_TX_MAX_MTU_SEG);
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > I40E_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -1606,18 +1606,18 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- /**
- * MSS outside the range are considered malicious
- */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, I40E_MIN_TSO_MSS, I40E_MAX_TSO_MSS);
- pkt_error = true;
- break;
- }
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > I40E_TSO_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_TSO_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -1627,13 +1627,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- if (adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & I40E_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
-@@ -3573,7 +3573,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
@@ -7867 +7577 @@
-index 44276dcf38..c56fcfadf0 100644
+index 54bff05675..9087909ec2 100644
@@ -7870 +7580 @@
-@@ -2383,7 +2383,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7879 +7589 @@
-@@ -2418,7 +2418,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7888 +7598 @@
-@@ -3059,12 +3059,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
+@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
@@ -7903 +7613 @@
-@@ -3109,7 +3109,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7912 +7622 @@
-@@ -3136,7 +3136,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7922 +7632 @@
-index ecc31430d1..4850b9e381 100644
+index f19aa14646..ec0dffa30e 100644
@@ -7925 +7635 @@
-@@ -3036,7 +3036,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
@@ -7934,58 +7643,0 @@
-@@ -3830,7 +3830,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -3838,7 +3838,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_SIZE) &&
- (mb->data_len < IAVF_TX_MIN_PKT_LEN ||
- mb->data_len > adapter->vf.max_pkt_len)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- mb->data_len, IAVF_TX_MIN_PKT_LEN, adapter->vf.max_pkt_len);
- pkt_error = true;
- break;
-@@ -3848,7 +3848,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- /* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
- if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
- if (mb->nb_segs > IAVF_TX_MAX_MTU_SEG) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, IAVF_TX_MAX_MTU_SEG);
- pkt_error = true;
- break;
-@@ -3856,12 +3856,12 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- } else if ((mb->tso_segsz < IAVF_MIN_TSO_MSS) ||
- (mb->tso_segsz > IAVF_MAX_TSO_MSS)) {
- /* MSS outside the range are considered malicious */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, IAVF_MIN_TSO_MSS, IAVF_MAX_TSO_MSS);
- pkt_error = true;
- break;
- } else if (mb->nb_segs > txq->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
-@@ -3869,13 +3869,13 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- if (adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
@@ -7993 +7645 @@
-index 8f3a385ca5..91f4943a11 100644
+index 5d845bba31..a025b0ea7f 100644
@@ -8005 +7657 @@
-@@ -2088,7 +2088,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
@@ -8082 +7734 @@
-index 304f959b7e..7b1bd163a2 100644
+index c1d2b91ad7..86f43050a5 100644
@@ -8085 +7737 @@
-@@ -1907,7 +1907,7 @@ no_dsn:
+@@ -1867,7 +1867,7 @@ no_dsn:
@@ -8094 +7746 @@
-@@ -1916,7 +1916,7 @@ load_fw:
+@@ -1876,7 +1876,7 @@ load_fw:
@@ -8103 +7755 @@
-@@ -2166,7 +2166,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
+@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
@@ -8112 +7764 @@
-@@ -2405,20 +2405,20 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8136 +7788 @@
-@@ -2470,14 +2470,14 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8147 +7799 @@
- ret = ice_lldp_fltr_add_remove(hw, vsi->vsi_id, true);
+ ret = ice_vsi_config_sw_lldp(vsi, true);
@@ -8154,3 +7806,3 @@
-@@ -2502,7 +2502,7 @@ ice_dev_init(struct rte_eth_dev *dev)
- if (hw->phy_model == ICE_PHY_E822) {
- ret = ice_start_phy_timer_e822(hw, hw->pf_id);
+@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
+ if (hw->phy_cfg == ICE_PHY_E822) {
+ ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
@@ -8163 +7815 @@
-@@ -2748,7 +2748,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8172 +7824 @@
-@@ -2769,7 +2769,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8181 +7833 @@
-@@ -3164,7 +3164,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8190 +7842 @@
-@@ -3180,15 +3180,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8209 +7861 @@
-@@ -3378,7 +3378,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
+@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
@@ -8218 +7870 @@
-@@ -3413,12 +3413,12 @@ static int ice_init_rss(struct ice_pf *pf)
+@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)
@@ -8233 +7885 @@
-@@ -4277,7 +4277,7 @@ ice_phy_conf_link(struct ice_hw *hw,
+@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
@@ -8242 +7894 @@
-@@ -5734,7 +5734,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
+@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
@@ -8251 +7903 @@
-@@ -5805,7 +5805,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
+@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
@@ -8260,27 +7911,0 @@
-@@ -6773,7 +6773,7 @@ ice_fec_get_capability(struct rte_eth_dev *dev, struct rte_eth_fec_capa *speed_f
- ret = ice_aq_get_phy_caps(hw->port_info, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- &pcaps, NULL);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -6805,7 +6805,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
-
- ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -6815,7 +6815,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- ret = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- &pcaps, NULL);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- ret);
- return -ENOTSUP;
- }
@@ -8288 +7913 @@
-index edd8cc8f1a..741107f939 100644
+index 0b7920ad44..dd9130ace3 100644
@@ -8301 +7926 @@
-index b720e0f755..00d65bc637 100644
+index d8c46347d2..dad117679d 100644
@@ -8304 +7929 @@
-@@ -1245,13 +1245,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8320 +7945 @@
-@@ -1259,7 +1259,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8329 +7954 @@
-@@ -1381,7 +1381,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
+@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
@@ -8339 +7964 @@
-index f270498ed1..acd7539b5e 100644
+index dea6a5b535..7da314217a 100644
@@ -8342 +7967 @@
-@@ -2839,7 +2839,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
+@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
@@ -8351,66 +7975,0 @@
-@@ -3714,7 +3714,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- if ((adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -3723,7 +3723,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- (mb->data_len > mb->pkt_len ||
- mb->data_len < ICE_TX_MIN_PKT_LEN ||
- mb->data_len > ICE_FRAME_SIZE_MAX)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)",
- mb->data_len, ICE_TX_MIN_PKT_LEN, ICE_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -3736,13 +3736,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- * the limites.
- */
- if (mb->nb_segs > ICE_TX_MTU_SEG_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, ICE_TX_MTU_SEG_MAX);
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > ICE_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, ICE_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -3756,13 +3756,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- /**
- * MSS outside the range are considered malicious
- */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, ICE_MIN_TSO_MSS, ICE_MAX_TSO_MSS);
- pkt_error = true;
- break;
- }
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
-@@ -3771,13 +3771,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- if (adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & ICE_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
@@ -8692 +8251 @@
-index d88d4065f1..357307b2e0 100644
+index a44497ce51..3ac65ca3b3 100644
@@ -8695 +8254 @@
-@@ -1155,10 +1155,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
@@ -8707 +8266 @@
-@@ -1783,7 +1780,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -8825 +8384 @@
-index 91ba395ac3..e967fe5e48 100644
+index 0a0f639e39..002bc71c2a 100644
@@ -8828 +8387 @@
-@@ -173,14 +173,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8845 +8404 @@
-@@ -193,7 +193,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8854 +8413 @@
-@@ -424,7 +424,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
+@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
@@ -8863 +8422 @@
-@@ -630,7 +630,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8872 +8431 @@
-@@ -679,7 +679,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8881 +8440 @@
-@@ -713,7 +713,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8890 +8449 @@
-@@ -769,7 +769,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8899 +8458 @@
-@@ -806,7 +806,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8944 +8503 @@
-index 16da22b5c6..e220ffaf92 100644
+index 18377d9caf..f05f4c24df 100644
@@ -8957 +8516 @@
-index c19db5c0eb..9c2872429f 100644
+index a1a7e93288..7c0ac6888b 100644
@@ -9001 +8560 @@
-index e8dda8d460..29944f5070 100644
+index 4dced0d328..68b0a8b8ab 100644
@@ -9004 +8563 @@
-@@ -1119,7 +1119,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
+@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
@@ -9014 +8573 @@
-index 23a452cacd..6c45ffaad3 100644
+index fb86e7b10d..4321924cb9 100644
@@ -9017,3 +8576,3 @@
-@@ -382,7 +382,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- err = ngbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- if (err) {
+@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
+ if (ssid == 0x1) {
@@ -9076 +8635 @@
-index 45ea0b9c34..e84de5c1c7 100644
+index 9f11a2f317..8628edf8a7 100644
@@ -9079 +8638 @@
-@@ -170,7 +170,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
+@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
@@ -9089 +8648 @@
-index 39b28de2d0..d44ac211f1 100644
+index ef275703c3..74b63a161f 100644
@@ -9147 +8706 @@
-index 2aeebb4675..76f72c64c9 100644
+index 7f4edf8dcf..fdab542246 100644
@@ -9232 +8791 @@
-index 73eb0c9d31..7d5dd91a77 100644
+index 82e57520d3..938c51b35d 100644
@@ -9235 +8794 @@
-@@ -120,7 +120,7 @@ union otx_ep_instr_irh {
+@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
@@ -9245 +8804 @@
-index 46211361a0..c4a5a67c79 100644
+index 615cbbb648..c0298a56ac 100644
@@ -9248 +8807 @@
-@@ -175,7 +175,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
@@ -9257 +8816 @@
-@@ -220,7 +220,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9266 +8825 @@
-@@ -237,7 +237,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9275 +8834 @@
-@@ -246,7 +246,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9284 +8843 @@
-@@ -255,7 +255,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9293 +8852 @@
-@@ -298,7 +298,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
+@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
@@ -9302 +8861 @@
-@@ -342,12 +342,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
+@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
@@ -9317 +8876 @@
-@@ -361,7 +361,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9326 +8885 @@
-@@ -385,7 +385,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9335 +8894 @@
-@@ -393,7 +393,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9344 +8903 @@
-@@ -413,10 +413,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
+@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
@@ -9357 +8916 @@
-@@ -460,29 +460,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9392 +8951 @@
-@@ -511,7 +511,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
+@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
@@ -9401 +8960 @@
-@@ -545,16 +545,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9421 +8980 @@
-@@ -562,12 +562,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9436 +8995 @@
-@@ -660,23 +660,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
+@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
@@ -9465 +9024 @@
-@@ -692,7 +692,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9474 +9033 @@
-@@ -741,22 +741,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
+@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
@@ -9502,10 +9061 @@
-@@ -780,7 +780,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
-
- /* Parse devargs string */
- if (otx_ethdev_parse_devargs(eth_dev->device->devargs, otx_epvf)) {
-- otx_ep_err("Failed to parse devargs\n");
-+ otx_ep_err("Failed to parse devargs");
- return -EINVAL;
- }
-
-@@ -797,7 +797,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9520 +9070 @@
-@@ -817,12 +817,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9536 +9086 @@
-@@ -831,7 +831,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9665 +9215 @@
-index ec32ab087e..9680a59797 100644
+index c421ef0a1c..65a1f304e8 100644
@@ -9668 +9218 @@
-@@ -23,19 +23,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
+@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
@@ -9691 +9241 @@
-@@ -47,7 +47,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9700 +9250 @@
-@@ -69,7 +69,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9709 +9259 @@
-@@ -95,7 +95,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9718 +9268 @@
-@@ -103,7 +103,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9727 +9277 @@
-@@ -118,7 +118,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9736 +9286 @@
-@@ -126,7 +126,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9745 +9295 @@
-@@ -134,14 +134,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9762 +9312 @@
-@@ -187,12 +187,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
+@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
@@ -9777 +9327 @@
-@@ -235,7 +235,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9786 +9336 @@
-@@ -255,7 +255,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9795 +9345 @@
-@@ -270,7 +270,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
+@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
@@ -9804 +9354 @@
-@@ -324,7 +324,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9813 +9363 @@
-@@ -344,23 +344,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9841 +9391 @@
-@@ -396,17 +396,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
+@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
@@ -9862 +9412 @@
-@@ -431,12 +431,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
+@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
@@ -9877 +9427 @@
-@@ -599,7 +599,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
+@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
@@ -9886 +9436 @@
-@@ -675,16 +675,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
+@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
@@ -9910 +9460 @@
-@@ -757,7 +757,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9919 +9469 @@
-@@ -766,7 +766,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9928 +9478 @@
-@@ -834,7 +834,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -10039 +9589 @@
-index 3c2154043c..c802b2c389 100644
+index 2a8378a33e..5f0cd1bb7f 100644
@@ -10079 +9629 @@
-index eccaaa2448..725ffcb2bc 100644
+index 0073dd7405..dc04a52639 100644
@@ -10103 +9653 @@
-@@ -583,16 +583,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10123 +9673 @@
-@@ -603,7 +603,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10132 +9682 @@
-@@ -993,24 +993,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
+@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
@@ -10227 +9777 @@
-index ede5fc83e3..25e28fd9f6 100644
+index c35585f5fd..dcc8cbe943 100644
@@ -10499 +10049 @@
-index 609d95dcfa..4441a90bdf 100644
+index ba2ef4058e..ee563c55ce 100644
@@ -10502 +10052 @@
-@@ -1814,7 +1814,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
+@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
@@ -10512 +10062 @@
-index 2fabb9fc4e..2834468764 100644
+index ad29c3cfec..a8bdc10232 100644
@@ -10515,3 +10065,3 @@
-@@ -613,7 +613,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- err = txgbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- if (err) {
+@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
+ if (ssid == 0x1) {
@@ -10524 +10074 @@
-@@ -2762,7 +2762,7 @@ txgbe_dev_detect_sfp(void *param)
+@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
@@ -10734,13 +10283,0 @@
-diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-index 3246b74e13..bc3e2a9af5 100644
---- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
-+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-@@ -670,7 +670,7 @@ vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
- notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
- data->vhostfd, i * page_size);
- if (notify_area[i] == MAP_FAILED) {
-- PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d\n",
-+ PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
- dev->path, i);
- i--;
- goto map_err;
@@ -10748 +10285 @@
-index 48b872524a..e8642be86b 100644
+index 1bfd6aba80..d93d443ec9 100644
@@ -10751 +10288 @@
-@@ -1149,7 +1149,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
+@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
@@ -10761 +10298 @@
-index 467fb61137..78fac63ab6 100644
+index 70ae9c6035..f98cdb6d58 100644
@@ -10764 +10301 @@
-@@ -1095,10 +1095,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
+@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
@@ -10870 +10407 @@
-index a972b3b7a4..113a22b0a7 100644
+index f89bd3f9e2..997fbf8a0d 100644
@@ -10916 +10453 @@
-@@ -661,7 +661,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
+@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
@@ -10925 +10462 @@
-@@ -816,13 +816,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
+@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
@@ -10941 +10478 @@
-@@ -846,14 +846,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10959 +10496 @@
-@@ -864,7 +864,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10968 +10505 @@
-@@ -880,7 +880,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10977 +10514 @@
-@@ -923,7 +923,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10986 +10523 @@
-@@ -954,7 +954,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10995 +10532 @@
-@@ -1230,13 +1230,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
+@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
@@ -11011 +10548 @@
-@@ -1251,7 +1251,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
+@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
@@ -11020 +10557 @@
-@@ -1263,14 +1263,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
+@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
@@ -11037 +10574 @@
-@@ -1290,15 +1290,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
+@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
@@ -11058 +10595 @@
-@@ -1321,17 +1321,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
+@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
@@ -11082 +10619 @@
-@@ -1350,28 +1350,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
+@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
@@ -11122 +10659 @@
-@@ -1381,7 +1381,7 @@ fme_interrupt_handler(void *param)
+@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
@@ -11131 +10668 @@
-@@ -1407,7 +1407,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11140 +10677 @@
-@@ -1417,7 +1417,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11149 +10686 @@
-@@ -1480,7 +1480,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -11158 +10695 @@
-@@ -1521,7 +1521,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'eal/x86: fix 32-bit write combining store' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (2 preceding siblings ...)
2024-11-11 6:26 ` patch 'drivers: remove redundant newline from logs' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
` (116 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Radu Nicolau, Tyler Retzlaff, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f3f7310081a8db84e56dbeefa092c52874dedcc5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f3f7310081a8db84e56dbeefa092c52874dedcc5 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 14:27:57 +0100
Subject: [PATCH] eal/x86: fix 32-bit write combining store
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 41b09d64e35b877e8f29c4e5a8cf944e303695dd ]
The "movdiri" instruction is given as a series of bytes in rte_io.h so
that it works on compilers/assemblers which are unaware of the
instruction.
The REX prefix (0x40) on this instruction is invalid for 32-bit code,
causing issues.
Thankfully, the prefix is unnecessary in 64-bit code, since the data size
used is 32 bits.
Fixes: 8a00dfc738fe ("eal: add write combining store")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/x86/include/rte_io.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/eal/x86/include/rte_io.h b/lib/eal/x86/include/rte_io.h
index 0e1fefdee1..5366e09c47 100644
--- a/lib/eal/x86/include/rte_io.h
+++ b/lib/eal/x86/include/rte_io.h
@@ -24,7 +24,7 @@ __rte_x86_movdiri(uint32_t value, volatile void *addr)
{
asm volatile(
/* MOVDIRI */
- ".byte 0x40, 0x0f, 0x38, 0xf9, 0x02"
+ ".byte 0x0f, 0x38, 0xf9, 0x02"
:
: "a" (value), "d" (addr));
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.736952027 +0800
+++ 0004-eal-x86-fix-32-bit-write-combining-store.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From 41b09d64e35b877e8f29c4e5a8cf944e303695dd Mon Sep 17 00:00:00 2001
+From f3f7310081a8db84e56dbeefa092c52874dedcc5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 41b09d64e35b877e8f29c4e5a8cf944e303695dd ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'test/event: fix schedule type' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (3 preceding siblings ...)
2024-11-11 6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
` (115 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Wed, 24 Jul 2024 01:02:12 +0530
Subject: [PATCH] test/event: fix schedule type
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit adadb5585bd50260c3fa5495fcbe8baf64386f7e ]
A missing schedule type assignment might leave it with an
incorrect value; set it to RTE_SCHED_TYPE_PARALLEL.
Fixes: d007a7f39de3 ("eventdev: introduce link profiles")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index e4e234dc98..9a6c8f470c 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1189,6 +1189,7 @@ test_eventdev_profile_switch(void)
ev.op = RTE_EVENT_OP_NEW;
ev.flow_id = 0;
ev.u64 = 0xBADF00D0;
+ ev.sched_type = RTE_SCHED_TYPE_PARALLEL;
rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
TEST_ASSERT(rc == 1, "Failed to enqueue event");
ev.queue_id = 1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.773760326 +0800
+++ 0005-test-event-fix-schedule-type.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From adadb5585bd50260c3fa5495fcbe8baf64386f7e Mon Sep 17 00:00:00 2001
+From 241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit adadb5585bd50260c3fa5495fcbe8baf64386f7e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'test/event: fix target event queue' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (4 preceding siblings ...)
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
` (114 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, Amit Prakash Shukla, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=638e0139f654a52d374517d41797988ec47f63f3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 638e0139f654a52d374517d41797988ec47f63f3 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Thu, 22 Aug 2024 02:07:31 +0530
Subject: [PATCH] test/event: fix target event queue
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 367fa3504851ec6c4aef393a7c53638da45a903e ]
In OP_FWD mode, if internal port is supported, the target event queue
should be the TEST_APP_EV_QUEUE_ID.
Fixes: a276e7c8fbb3 ("test/event: add DMA adapter auto-test")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Tested-by: Amit Prakash Shukla <amitprakashs@marvell.com>
Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
app/test/test_event_dma_adapter.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/app/test/test_event_dma_adapter.c b/app/test/test_event_dma_adapter.c
index 35b417b69f..de0d671d3f 100644
--- a/app/test/test_event_dma_adapter.c
+++ b/app/test/test_event_dma_adapter.c
@@ -276,7 +276,10 @@ test_op_forward_mode(void)
memset(&ev[i], 0, sizeof(struct rte_event));
ev[i].event = 0;
ev[i].event_type = RTE_EVENT_TYPE_DMADEV;
- ev[i].queue_id = TEST_DMA_EV_QUEUE_ID;
+ if (params.internal_port_op_fwd)
+ ev[i].queue_id = TEST_APP_EV_QUEUE_ID;
+ else
+ ev[i].queue_id = TEST_DMA_EV_QUEUE_ID;
ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
ev[i].flow_id = 0xAABB;
ev[i].event_ptr = op;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.808321426 +0800
+++ 0006-test-event-fix-target-event-queue.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From 367fa3504851ec6c4aef393a7c53638da45a903e Mon Sep 17 00:00:00 2001
+From 638e0139f654a52d374517d41797988ec47f63f3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 367fa3504851ec6c4aef393a7c53638da45a903e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3b39521153..9988d4fc7b 100644
+index 35b417b69f..de0d671d3f 100644
@@ -23 +25,2 @@
-@@ -271,7 +271,10 @@ test_op_forward_mode(void)
+@@ -276,7 +276,10 @@ test_op_forward_mode(void)
+ memset(&ev[i], 0, sizeof(struct rte_event));
@@ -25 +27,0 @@
- ev[i].op = RTE_EVENT_OP_NEW;
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'examples/eventdev: fix queue crash with generic pipeline' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (5 preceding siblings ...)
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
` (113 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Chenxingyu Wang, Pavan Nikhilesh, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=993e9b6fdf97d1ae2b30766c7b273c3e80de94a5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 993e9b6fdf97d1ae2b30766c7b273c3e80de94a5 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Wed, 18 Sep 2024 06:41:42 +0000
Subject: [PATCH] examples/eventdev: fix queue crash with generic pipeline
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f6f2307931c90d924405ea44b0b4be9d3d01bd17 ]
There was a segmentation fault when executing eventdev_pipeline with
command [1] on a ConnectX-5 NIC:
0x000000000079208c in rte_eth_tx_buffer (tx_pkt=0x16f8ed300, buffer=0x100,
queue_id=11, port_id=0) at
../lib/ethdev/rte_ethdev.h:6636
txa_service_tx (txa=0x17b19d080, ev=0xffffffffe500, n=4) at
../lib/eventdev/rte_event_eth_tx_adapter.c:631
0x0000000000792234 in txa_service_func (args=0x17b19d080) at
../lib/eventdev/rte_event_eth_tx_adapter.c:666
0x00000000008b0784 in service_runner_do_callback (s=0x17fffe100,
cs=0x17ffb5f80, service_idx=2) at
../lib/eal/common/rte_service.c:405
0x00000000008b0ad8 in service_run (i=2, cs=0x17ffb5f80,
service_mask=18446744073709551615, s=0x17fffe100,
serialize_mt_unsafe=0) at
../lib/eal/common/rte_service.c:441
0x00000000008b0c68 in rte_service_run_iter_on_app_lcore (id=2,
serialize_mt_unsafe=0) at
../lib/eal/common/rte_service.c:477
0x000000000057bcc4 in schedule_devices (lcore_id=0) at
../examples/eventdev_pipeline/pipeline_common.h:138
0x000000000057ca94 in worker_generic_burst (arg=0x17b131e80) at
../examples/eventdev_pipeline/
pipeline_worker_generic.c:83
0x00000000005794a8 in main (argc=11, argv=0xfffffffff470) at
../examples/eventdev_pipeline/main.c:449
The root cause is that the queue_id (11) is invalid. The queue_id comes
from mbuf.hash.txadapter.txq, which the NIC driver may pre-write when
receiving packets (e.g. by pre-writing the overlapping mbuf.hash.fdir.hi
field).
Because this example only enables one ethdev queue, fix it by resetting
txq to zero in the first worker stage.
[1] dpdk-eventdev_pipeline -l 0-48 --vdev event_sw0 -- -r1 -t1 -e1 -w ff0
-s5 -n0 -c32 -W1000 -D
Fixes: 81fb40f95c82 ("examples/eventdev: add generic worker pipeline")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Chenxingyu Wang <wangchenxingyu@huawei.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
.mailmap | 1 +
examples/eventdev_pipeline/pipeline_worker_generic.c | 12 ++++++++----
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/.mailmap b/.mailmap
index f2883144f3..7aa2c27226 100644
--- a/.mailmap
+++ b/.mailmap
@@ -229,6 +229,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
Chengwen Feng <fengchengwen@huawei.com>
Chenmin Sun <chenmin.sun@intel.com>
Chenming Chang <ccm@ccm.ink>
+Chenxingyu Wang <wangchenxingyu@huawei.com>
Chenxu Di <chenxux.di@intel.com>
Chenyu Huang <chenyux.huang@intel.com>
Cheryl Houser <chouser@vmware.com>
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 783f68c91e..831d7fd53d 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -38,10 +38,12 @@ worker_generic(void *arg)
}
received++;
- /* The first worker stage does classification */
- if (ev.queue_id == cdata.qid[0])
+ /* The first worker stage does classification and sets txq. */
+ if (ev.queue_id == cdata.qid[0]) {
ev.flow_id = ev.mbuf->hash.rss
% cdata.num_fids;
+ rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+ }
ev.queue_id = cdata.next_qid[ev.queue_id];
ev.op = RTE_EVENT_OP_FORWARD;
@@ -96,10 +98,12 @@ worker_generic_burst(void *arg)
for (i = 0; i < nb_rx; i++) {
- /* The first worker stage does classification */
- if (events[i].queue_id == cdata.qid[0])
+ /* The first worker stage does classification and sets txq. */
+ if (events[i].queue_id == cdata.qid[0]) {
events[i].flow_id = events[i].mbuf->hash.rss
% cdata.num_fids;
+ rte_event_eth_tx_adapter_txq_set(events[i].mbuf, 0);
+ }
events[i].queue_id = cdata.next_qid[events[i].queue_id];
events[i].op = RTE_EVENT_OP_FORWARD;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.843270025 +0800
+++ 0007-examples-eventdev-fix-queue-crash-with-generic-pipel.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From f6f2307931c90d924405ea44b0b4be9d3d01bd17 Mon Sep 17 00:00:00 2001
+From 993e9b6fdf97d1ae2b30766c7b273c3e80de94a5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f6f2307931c90d924405ea44b0b4be9d3d01bd17 ]
@@ -46 +48,0 @@
-Cc: stable@dpdk.org
@@ -57 +59 @@
-index 94fa73aa36..8a832ba4be 100644
+index f2883144f3..7aa2c27226 100644
@@ -60 +62 @@
-@@ -236,6 +236,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
+@@ -229,6 +229,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'crypto/dpaa2_sec: fix memory leak' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (6 preceding siblings ...)
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
` (112 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Gagandeep Singh; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=71c5928f9b78def6b0a13fd7e71f5505428b3dd6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 71c5928f9b78def6b0a13fd7e71f5505428b3dd6 Mon Sep 17 00:00:00 2001
From: Gagandeep Singh <g.singh@nxp.com>
Date: Tue, 6 Aug 2024 15:57:26 +0530
Subject: [PATCH] crypto/dpaa2_sec: fix memory leak
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 ]
Fix a memory leak while creating the PDCP session
with invalid data.
Fixes: bef594ec5cc8 ("crypto/dpaa2_sec: support PDCP offload")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b65bea3b3f..bd5590c02d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3466,6 +3466,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
} else {
DPAA2_SEC_ERR("Invalid crypto type");
+ rte_free(priv);
return -EINVAL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.876324924 +0800
+++ 0008-crypto-dpaa2_sec-fix-memory-leak.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 Mon Sep 17 00:00:00 2001
+From 71c5928f9b78def6b0a13fd7e71f5505428b3dd6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 2cdf9308f8..e4109e8f0a 100644
+index b65bea3b3f..bd5590c02d 100644
@@ -21 +23 @@
-@@ -3422,6 +3422,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
+@@ -3466,6 +3466,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (7 preceding siblings ...)
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
` (111 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Varun Sethi; +Cc: xuemingl, Gagandeep Singh, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5b8d264d559366660fcf5db9bf31db47f41708f0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5b8d264d559366660fcf5db9bf31db47f41708f0 Mon Sep 17 00:00:00 2001
From: Varun Sethi <v.sethi@nxp.com>
Date: Tue, 6 Aug 2024 15:57:27 +0530
Subject: [PATCH] common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 ]
Add a JUMP instruction with the CALM flag to ensure that
previous processing has completed.
Fixes: 8827d94398f1 ("crypto/dpaa2_sec/hw: support AES-AES 18-bit PDCP")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Varun Sethi <v.sethi@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/common/dpaax/caamflib/desc/pdcp.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 0ed9eec816..27dd5c4347 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1220,6 +1220,11 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ /* conditional jump with calm added to ensure that the
+ * previous processing has been completed
+ */
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1921,6 +1926,11 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+ /* conditional jump with calm added to ensure that the
+ * previous processing has been completed
+ */
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.912654224 +0800
+++ 0009-common-dpaax-caamflib-fix-PDCP-SNOW-ZUC-watchdog.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 Mon Sep 17 00:00:00 2001
+From 5b8d264d559366660fcf5db9bf31db47f41708f0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index bc35114cf4..9ada3905c5 100644
+index 0ed9eec816..27dd5c4347 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'dev: fix callback lookup when unregistering device' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (8 preceding siblings ...)
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
` (110 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Malcolm Bumgardner; +Cc: xuemingl, Long Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=310058b8699eb2833fe0b8093a355f29fcea1b67
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 310058b8699eb2833fe0b8093a355f29fcea1b67 Mon Sep 17 00:00:00 2001
From: Malcolm Bumgardner <mbumgard@cisco.com>
Date: Thu, 18 Jul 2024 12:37:28 -0700
Subject: [PATCH] dev: fix callback lookup when unregistering device
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d ]
The device event unregister code unconditionally removes all
callbacks that are registered with device_name set to NULL.
This results in many callbacks being incorrectly removed.
Fix this by only removing callbacks with matching cb_fn and cb_arg.
Fixes: a753e53d517b ("eal: add device event monitor framework")
Signed-off-by: Malcolm Bumgardner <mbumgard@cisco.com>
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
.mailmap | 1 +
lib/eal/common/eal_common_dev.c | 13 +++++++------
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/.mailmap b/.mailmap
index 7aa2c27226..4b8a131d55 100644
--- a/.mailmap
+++ b/.mailmap
@@ -860,6 +860,7 @@ Mahesh Adulla <mahesh.adulla@amd.com>
Mahipal Challa <mchalla@marvell.com>
Mah Yock Gen <yock.gen.mah@intel.com>
Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
+Malcolm Bumgardner <mbumgard@cisco.com>
Mallesham Jatharakonda <mjatharakonda@oneconvergence.com>
Mallesh Koujalagi <malleshx.koujalagi@intel.com>
Malvika Gupta <malvika.gupta@arm.com>
diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c
index 614ef6c9fc..bc53b2e28d 100644
--- a/lib/eal/common/eal_common_dev.c
+++ b/lib/eal/common/eal_common_dev.c
@@ -550,16 +550,17 @@ rte_dev_event_callback_unregister(const char *device_name,
next = TAILQ_NEXT(event_cb, next);
if (device_name != NULL && event_cb->dev_name != NULL) {
- if (!strcmp(event_cb->dev_name, device_name)) {
- if (event_cb->cb_fn != cb_fn ||
- (cb_arg != (void *)-1 &&
- event_cb->cb_arg != cb_arg))
- continue;
- }
+ if (strcmp(event_cb->dev_name, device_name))
+ continue;
} else if (device_name != NULL) {
continue;
}
+ /* Remove only matching callback with arg */
+ if (event_cb->cb_fn != cb_fn ||
+ (cb_arg != (void *)-1 && event_cb->cb_arg != cb_arg))
+ continue;
+
/*
* if this callback is not executing right now,
* then remove it.
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.948838523 +0800
+++ 0010-dev-fix-callback-lookup-when-unregistering-device.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d Mon Sep 17 00:00:00 2001
+From 310058b8699eb2833fe0b8093a355f29fcea1b67 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index a66da3c8cb..8004772125 100644
+index 7aa2c27226..4b8a131d55 100644
@@ -27,2 +29,2 @@
-@@ -886,6 +886,7 @@ Mahipal Challa <mchalla@marvell.com>
- Mahmoud Maatuq <mahmoudmatook.mm@gmail.com>
+@@ -860,6 +860,7 @@ Mahesh Adulla <mahesh.adulla@amd.com>
+ Mahipal Challa <mchalla@marvell.com>
@@ -36 +38 @@
-index a99252b02f..70aa04dcd9 100644
+index 614ef6c9fc..bc53b2e28d 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'crypto/scheduler: fix session size computation' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (9 preceding siblings ...)
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
` (109 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Julien Hascoet; +Cc: xuemingl, Kai Ji, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a082f249748fac379ce9e996ba8bf9a555b89a10
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a082f249748fac379ce9e996ba8bf9a555b89a10 Mon Sep 17 00:00:00 2001
From: Julien Hascoet <ju.hascoet@gmail.com>
Date: Fri, 5 Jul 2024 14:57:56 +0200
Subject: [PATCH] crypto/scheduler: fix session size computation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 ]
The crypto scheduler session size computation was taking
into account only the worker session sizes and not its own.
Fixes: e2af4e403c15 ("crypto/scheduler: support DOCSIS security protocol")
Signed-off-by: Julien Hascoet <ju.hascoet@gmail.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
.mailmap | 1 +
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 4b8a131d55..8b9e849d05 100644
--- a/.mailmap
+++ b/.mailmap
@@ -711,6 +711,7 @@ Julien Aube <julien_dpdk@jaube.fr>
Julien Castets <jcastets@scaleway.com>
Julien Courtat <julien.courtat@6wind.com>
Julien Cretin <julien.cretin@trust-in-soft.com>
+Julien Hascoet <ju.hascoet@gmail.com>
Julien Massonneau <julien.massonneau@6wind.com>
Julien Meunier <julien.meunier@nokia.com> <julien.meunier@6wind.com>
Július Milan <jmilan.dev@gmail.com>
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index a18f7a08b0..6e43438469 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -185,7 +185,7 @@ scheduler_session_size_get(struct scheduler_ctx *sched_ctx,
uint8_t session_type)
{
uint8_t i = 0;
- uint32_t max_priv_sess_size = 0;
+ uint32_t max_priv_sess_size = sizeof(struct scheduler_session_ctx);
/* Check what is the maximum private session size for all workers */
for (i = 0; i < sched_ctx->nb_workers; i++) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.999628822 +0800
+++ 0011-crypto-scheduler-fix-session-size-computation.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 Mon Sep 17 00:00:00 2001
+From a082f249748fac379ce9e996ba8bf9a555b89a10 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 8004772125..15d9b61029 100644
+index 4b8a131d55..8b9e849d05 100644
@@ -23 +25 @@
-@@ -734,6 +734,7 @@ Julien Aube <julien_dpdk@jaube.fr>
+@@ -711,6 +711,7 @@ Julien Aube <julien_dpdk@jaube.fr>
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (10 preceding siblings ...)
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
` (108 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Tejasree Kondoj; +Cc: xuemingl, Akhil Goyal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e3875312dbf74eeec02d8460ae4dd2f35bc2b464
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e3875312dbf74eeec02d8460ae4dd2f35bc2b464 Mon Sep 17 00:00:00 2001
From: Tejasree Kondoj <ktejasree@marvell.com>
Date: Fri, 13 Sep 2024 12:37:26 +0530
Subject: [PATCH] examples/ipsec-secgw: fix dequeue count from cryptodev
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 88948ff31f57618a74c8985c59e332676995b438 ]
Set the dequeue packet count to at most MAX_PKT_BURST
instead of MAX_PKTS.
Dequeue from the cryptodev is called with MAX_PKTS, but the
routing functions allocate hop/dst_ip arrays of size
MAX_PKT_BURST. This can corrupt the stack, causing a
stack-smashing error when more than MAX_PKT_BURST
packets are returned from the cryptodev.
Fixes: a2b445b810ac ("examples/ipsec-secgw: allow larger burst size for vectors")
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
examples/ipsec-secgw/ipsec-secgw.c | 6 ++++--
examples/ipsec-secgw/ipsec_process.c | 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 761b9cf396..5e77d9d2ce 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -626,12 +626,13 @@ drain_inbound_crypto_queues(const struct lcore_conf *qconf,
uint32_t n;
struct ipsec_traffic trf;
unsigned int lcoreid = rte_lcore_id();
+ const int nb_pkts = RTE_DIM(trf.ipsec.pkts);
if (app_sa_prm.enable == 0) {
/* dequeue packets from crypto-queue */
n = ipsec_inbound_cqp_dequeue(ctx, trf.ipsec.pkts,
- RTE_DIM(trf.ipsec.pkts));
+ RTE_MIN(MAX_PKT_BURST, nb_pkts));
trf.ip4.num = 0;
trf.ip6.num = 0;
@@ -663,12 +664,13 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf,
{
uint32_t n;
struct ipsec_traffic trf;
+ const int nb_pkts = RTE_DIM(trf.ipsec.pkts);
if (app_sa_prm.enable == 0) {
/* dequeue packets from crypto-queue */
n = ipsec_outbound_cqp_dequeue(ctx, trf.ipsec.pkts,
- RTE_DIM(trf.ipsec.pkts));
+ RTE_MIN(MAX_PKT_BURST, nb_pkts));
trf.ip4.num = 0;
trf.ip6.num = 0;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index b0cece3ad1..1a64a4b49f 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -336,6 +336,7 @@ ipsec_cqp_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
struct rte_ipsec_session *ss;
struct traffic_type *out;
struct rte_ipsec_group *pg;
+ const int nb_cops = RTE_DIM(trf->ipsec.pkts);
struct rte_crypto_op *cop[RTE_DIM(trf->ipsec.pkts)];
struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -345,7 +346,7 @@ ipsec_cqp_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
out = &trf->ipsec;
/* dequeue completed crypto-ops */
- n = ctx_dequeue(ctx, cop, RTE_DIM(cop));
+ n = ctx_dequeue(ctx, cop, RTE_MIN(MAX_PKT_BURST, nb_cops));
if (n == 0)
return;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.037734621 +0800
+++ 0012-examples-ipsec-secgw-fix-dequeue-count-from-cryptode.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 88948ff31f57618a74c8985c59e332676995b438 Mon Sep 17 00:00:00 2001
+From e3875312dbf74eeec02d8460ae4dd2f35bc2b464 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 88948ff31f57618a74c8985c59e332676995b438 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index e98ad2572e..063cc8768e 100644
+index 761b9cf396..5e77d9d2ce 100644
@@ -60 +62 @@
-index ddbe30745b..5080e810e0 100644
+index b0cece3ad1..1a64a4b49f 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'bpf: fix free function mismatch if convert fails' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (11 preceding siblings ...)
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
` (107 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d4c099c6fcf9849630f3b7f930fc193d3ef54e6c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d4c099c6fcf9849630f3b7f930fc193d3ef54e6c Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:11 -0700
Subject: [PATCH] bpf: fix free function mismatch if convert fails
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a3923d6bd5c0b9838d8f4678233093ffad036193 ]
If conversion of cBF to eBPF fails then an object allocated with
rte_malloc() would be passed to free().
[908/3201] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
../lib/bpf/bpf_convert.c: In function ‘rte_bpf_convert’:
../lib/bpf/bpf_convert.c:559:17:
warning: ‘free’ called on pointer returned from a mismatched
allocation function [-Wmismatched-dealloc]
559 | free(prm);
| ^~~~~~~~~
../lib/bpf/bpf_convert.c:545:15: note: returned from ‘rte_zmalloc’
545 | prm = rte_zmalloc("bpf_filter",
| ^~~~~~~~~~~~~~~~~~~~~~~~~
546 | sizeof(*prm) + ebpf_len * sizeof(*ebpf), 0);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixes: 2eccf6afbea9 ("bpf: add function to convert classic BPF to DPDK BPF")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
lib/bpf/bpf_convert.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index d441be6663..cb400a4ffb 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -556,7 +556,7 @@ rte_bpf_convert(const struct bpf_program *prog)
ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len);
if (ret < 0) {
RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__);
- free(prm);
+ rte_free(prm);
rte_errno = -ret;
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.070564820 +0800
+++ 0013-bpf-fix-free-function-mismatch-if-convert-fails.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From a3923d6bd5c0b9838d8f4678233093ffad036193 Mon Sep 17 00:00:00 2001
+From d4c099c6fcf9849630f3b7f930fc193d3ef54e6c Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a3923d6bd5c0b9838d8f4678233093ffad036193 ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -37 +39 @@
-index d7ff2b4325..e7e298c9cb 100644
+index d441be6663..cb400a4ffb 100644
@@ -43 +45 @@
- RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__);
+ RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__);
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'baseband/la12xx: fix use after free in modem config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (12 preceding siblings ...)
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
` (106 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Hemant Agrawal, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e90be36798669790e22e49aa3db399630e8a4f48
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e90be36798669790e22e49aa3db399630e8a4f48 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:19 -0700
Subject: [PATCH] baseband/la12xx: fix use after free in modem config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6ffb34498913f84713e98d6a2a21d2a86028a604 ]
The info pointer (hp) could get freed twice.
Fix by nulling after free.
In function 'setup_la12xx_dev',
inlined from 'la12xx_bbdev_create' at
../drivers/baseband/la12xx/bbdev_la12xx.c:1029:8,
inlined from 'la12xx_bbdev_probe' at
../drivers/baseband/la12xx/bbdev_la12xx.c:1075:9:
../drivers/baseband/la12xx/bbdev_la12xx.c:901:9:
error: pointer 'hp_info' may be used after 'rte_free'
[-Werror=use-after-free]
901 | rte_free(hp);
| ^~~~~~~~~~~~
../drivers/baseband/la12xx/bbdev_la12xx.c:791:17:
note: call to 'rte_free' here
791 | rte_free(hp);
| ^~~~~~~~~~~~
Fixes: 24d0ba22546e ("baseband/la12xx: add queue and modem config")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/baseband/la12xx/bbdev_la12xx.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index af4b4f1e9a..2432cdf884 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -789,6 +789,7 @@ setup_la12xx_dev(struct rte_bbdev *dev)
ipc_priv->hugepg_start.size = hp->len;
rte_free(hp);
+ hp = NULL;
}
dev_ipc = open_ipc_dev(priv->modem_id);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.104682320 +0800
+++ 0014-baseband-la12xx-fix-use-after-free-in-modem-config.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 6ffb34498913f84713e98d6a2a21d2a86028a604 Mon Sep 17 00:00:00 2001
+From e90be36798669790e22e49aa3db399630e8a4f48 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6ffb34498913f84713e98d6a2a21d2a86028a604 ]
@@ -28 +30,0 @@
-Cc: stable@dpdk.org
* patch 'common/qat: fix use after free in device probe' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=67197a5768eb0b2579058cbf1862eba4c43537aa
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 67197a5768eb0b2579058cbf1862eba4c43537aa Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:17 -0700
Subject: [PATCH] common/qat: fix use after free in device probe
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb ]
Checking the return value of rte_memzone_free() is pointless;
if it failed, it was because the pointer was null.
Fixes: 7b1374b1e6e7 ("common/qat: limit configuration to primary process")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/common/qat/qat_device.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index eceb5c89c4..6901fb3aab 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -335,11 +335,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
return qat_dev;
error:
- if (rte_memzone_free(qat_dev_mz)) {
- QAT_LOG(DEBUG,
- "QAT internal error! Trying to free already allocated memzone: %s",
- qat_dev_mz->name);
- }
+ rte_memzone_free(qat_dev_mz);
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.137913819 +0800
+++ 0015-common-qat-fix-use-after-free-in-device-probe.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb Mon Sep 17 00:00:00 2001
+From 67197a5768eb0b2579058cbf1862eba4c43537aa Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 4a972a83bd..bca88fd9bd 100644
+index eceb5c89c4..6901fb3aab 100644
@@ -27 +29,2 @@
-@@ -390,11 +390,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev)
+@@ -335,11 +335,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
+
@@ -30 +32,0 @@
- rte_free(qat_dev->command_line);
* patch 'common/idpf: fix use after free in mailbox init' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=91f32226a7208deb90b1594cfeb769399b315687
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 91f32226a7208deb90b1594cfeb769399b315687 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:20 -0700
Subject: [PATCH] common/idpf: fix use after free in mailbox init
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4baf54ed9dc87b89ea2150578c51120bc0157bb0 ]
The macro in this driver was redefining LIST_FOR_EACH_ENTRY_SAFE
as a simple LIST_FOR_EACH macro.
But they are not the same: the _SAFE variant guarantees that
there will be no use after free.
Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/common/idpf/base/idpf_osdep.h | 10 ++++++++--
drivers/common/idpf/idpf_common_device.c | 3 +--
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..581a36cc40 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -341,10 +341,16 @@ idpf_hweight32(u32 num)
#define LIST_ENTRY_TYPE(type) LIST_ENTRY(type)
#endif
+#ifndef LIST_FOREACH_SAFE
+#define LIST_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = LIST_FIRST((head)); \
+ (var) && ((tvar) = LIST_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
#ifndef LIST_FOR_EACH_ENTRY_SAFE
#define LIST_FOR_EACH_ENTRY_SAFE(pos, temp, head, entry_type, list) \
- LIST_FOREACH(pos, head, list)
-
+ LIST_FOREACH_SAFE(pos, head, list, temp)
#endif
#ifndef LIST_FOR_EACH_ENTRY
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index cc4207a46e..77c58170b3 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -136,8 +136,7 @@ idpf_init_mbx(struct idpf_hw *hw)
if (ret != 0)
return ret;
- LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
- struct idpf_ctlq_info, cq_list) {
+ LIST_FOR_EACH_ENTRY(ctlq, &hw->cq_list_head, struct idpf_ctlq_info, cq_list) {
if (ctlq->q_id == IDPF_CTLQ_ID &&
ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
hw->asq = ctlq;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.178396818 +0800
+++ 0016-common-idpf-fix-use-after-free-in-mailbox-init.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 4baf54ed9dc87b89ea2150578c51120bc0157bb0 Mon Sep 17 00:00:00 2001
+From 91f32226a7208deb90b1594cfeb769399b315687 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4baf54ed9dc87b89ea2150578c51120bc0157bb0 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index e042ef871c..cf9e553906 100644
+index 74a376cb13..581a36cc40 100644
@@ -50 +52 @@
-index 8403ed83f9..e9fa024850 100644
+index cc4207a46e..77c58170b3 100644
* patch 'crypto/bcmfs: fix free function mismatch' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Ajit Khaparde, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fd941582eaf51694100db70a67551375476093d2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fd941582eaf51694100db70a67551375476093d2 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:06 -0700
Subject: [PATCH] crypto/bcmfs: fix free function mismatch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b1703af8e77d9e872e2ead92ab2dbcf290686f78 ]
The device structure is allocated with rte_malloc() and
then incorrectly freed with free().
This will lead to a corrupted malloc pool.
Bugzilla ID: 1552
Fixes: c8e79da7c676 ("crypto/bcmfs: introduce BCMFS driver")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index ada7ba342c..46522970d5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -139,7 +139,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
return fsdev;
cleanup:
- free(fsdev);
+ rte_free(fsdev);
return NULL;
}
@@ -163,7 +163,7 @@ fsdev_release(struct bcmfs_device *fsdev)
return;
TAILQ_REMOVE(&fsdev_list, fsdev, next);
- free(fsdev);
+ rte_free(fsdev);
}
static int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.212454218 +0800
+++ 0017-crypto-bcmfs-fix-free-function-mismatch.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From b1703af8e77d9e872e2ead92ab2dbcf290686f78 Mon Sep 17 00:00:00 2001
+From fd941582eaf51694100db70a67551375476093d2 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b1703af8e77d9e872e2ead92ab2dbcf290686f78 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
* patch 'dma/idxd: fix free function mismatch in device probe' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Bruce Richardson, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ce1390c98af10b10a62004f28fa5ed90121fd760
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ce1390c98af10b10a62004f28fa5ed90121fd760 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:07 -0700
Subject: [PATCH] dma/idxd: fix free function mismatch in device probe
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 91b026fb46d987e68c1152b0bb5f0bc8f1f274db ]
The data structure is allocated with rte_malloc() but incorrectly
freed with free() in the cleanup logic.
Bugzilla ID: 1549
Fixes: 9449330a8458 ("dma/idxd: create dmadev instances on PCI probe")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/dma/idxd/idxd_pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 2ee78773bb..c314aee65c 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -300,7 +300,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
return nb_wqs;
err:
- free(pci);
+ rte_free(pci);
return err_code;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.251908317 +0800
+++ 0018-dma-idxd-fix-free-function-mismatch-in-device-probe.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 91b026fb46d987e68c1152b0bb5f0bc8f1f274db Mon Sep 17 00:00:00 2001
+From ce1390c98af10b10a62004f28fa5ed90121fd760 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 91b026fb46d987e68c1152b0bb5f0bc8f1f274db ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 60ac219559..6ed03e96da 100644
+index 2ee78773bb..c314aee65c 100644
@@ -29 +31 @@
-@@ -301,7 +301,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
+@@ -300,7 +300,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
* patch 'event/cnxk: fix free function mismatch in port config' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Pavan Nikhilesh, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a3762499271d5541b0741fbceb9ccbdd5c36f557
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a3762499271d5541b0741fbceb9ccbdd5c36f557 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:08 -0700
Subject: [PATCH] event/cnxk: fix free function mismatch in port config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit db92f4e2ce491bb96605621cdd6f6251ea3bde85 ]
The error-path cleanup code would dereference a null pointer
and then pass that result to rte_free().
Fixes: 97a05c1fe634 ("event/cnxk: add port config")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 20f7f0d6df..f44d8fb377 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -118,8 +118,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return 0;
hws_fini:
for (i = i - 1; i >= 0; i--) {
- event_dev->data->ports[i] = NULL;
rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
}
return -ENOMEM;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.288408016 +0800
+++ 0019-event-cnxk-fix-free-function-mismatch-in-port-config.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From db92f4e2ce491bb96605621cdd6f6251ea3bde85 Mon Sep 17 00:00:00 2001
+From a3762499271d5541b0741fbceb9ccbdd5c36f557 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit db92f4e2ce491bb96605621cdd6f6251ea3bde85 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index c1df481827..84a55511a3 100644
+index 20f7f0d6df..f44d8fb377 100644
@@ -28 +30 @@
-@@ -121,8 +121,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+@@ -118,8 +118,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
* patch 'net/cnxk: fix use after free in mempool create' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c033d1168d448b3e4bb2b991abaac255331ed056
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c033d1168d448b3e4bb2b991abaac255331ed056 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:10 -0700
Subject: [PATCH] net/cnxk: fix use after free in mempool create
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c024de17933128f37b1dfe38a0fae9975be1b104 ]
The driver would refer to the mempool object after it was freed.
Bugzilla ID: 1554
Fixes: 7ea187184a51 ("common/cnxk: support 1-N pool-aura per NIX LF")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/cnxk/cnxk_ethdev_sec.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index b02dac4952..2cb2050faf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -135,8 +135,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
return -EINVAL;
}
- rte_mempool_free(hp);
plt_free(hp->pool_config);
+ rte_mempool_free(hp);
*aura_handle = 0;
*mpool = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.321414215 +0800
+++ 0020-net-cnxk-fix-use-after-free-in-mempool-create.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From c024de17933128f37b1dfe38a0fae9975be1b104 Mon Sep 17 00:00:00 2001
+From c033d1168d448b3e4bb2b991abaac255331ed056 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c024de17933128f37b1dfe38a0fae9975be1b104 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 6f5319e534..e428d2115d 100644
+index b02dac4952..2cb2050faf 100644
@@ -27 +29 @@
-@@ -136,8 +136,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
+@@ -135,8 +135,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
* patch 'net/cpfl: fix invalid free in JSON parser' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=55f413c5ad5c01b4149b239c2342c833d52f77c5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 55f413c5ad5c01b4149b239c2342c833d52f77c5 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:14 -0700
Subject: [PATCH] net/cpfl: fix invalid free in JSON parser
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 ]
With proper annotation, GCC discovers that this driver is calling
rte_free() on an object that was not allocated
(it is part of an array in another object).
In function ‘cpfl_flow_js_mr_layout’,
inlined from ‘cpfl_flow_js_mr_action’ at
../drivers/net/cpfl/cpfl_flow_parser.c:848:9,
inlined from ‘cpfl_flow_js_mod_rule’ at
../drivers/net/cpfl/cpfl_flow_parser.c:908:9,
inlined from ‘cpfl_parser_init’ at
../drivers/net/cpfl/cpfl_flow_parser.c:932:8,
inlined from ‘cpfl_parser_create’ at
../drivers/net/cpfl/cpfl_flow_parser.c:959:8:
../drivers/net/cpfl/cpfl_flow_parser.c:740:9: warning:
‘rte_free’ called on pointer ‘*parser.modifications’ with
nonzero offset [28, 15479062120396] [-Wfree-nonheap-object]
740 | rte_free(js_mod->layout);
| ^~~~~~~~~~~~~~~~~~~~~~~~
Fixes: 6cc97c9971d7 ("net/cpfl: build action mapping rules from JSON")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/cpfl/cpfl_flow_parser.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 011229a470..303e979015 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -737,7 +737,6 @@ cpfl_flow_js_mr_layout(json_t *ob_layouts, struct cpfl_flow_js_mr_action_mod *js
return 0;
err:
- rte_free(js_mod->layout);
return -EINVAL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.355857615 +0800
+++ 0021-net-cpfl-fix-invalid-free-in-JSON-parser.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 Mon Sep 17 00:00:00 2001
+From 55f413c5ad5c01b4149b239c2342c833d52f77c5 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 ]
@@ -29 +31,0 @@
-Cc: stable@dpdk.org
* patch 'net/e1000: fix use after free in filter flush' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cda329444d492b928d0e9f71c5e3bdfb9acc80e4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cda329444d492b928d0e9f71c5e3bdfb9acc80e4 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:12 -0700
Subject: [PATCH] net/e1000: fix use after free in filter flush
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 58196dc411576925a1d66b0da1d11b06072a7ac2 ]
The driver cleanup code was freeing the filter object and then
dereferencing it.
Bugzilla ID: 1550
Fixes: 6a4d050e2855 ("net/igb: flush all the filter")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/e1000/igb_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d64a1aedd3..222e359ed9 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -3857,11 +3857,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
filter_info->twotuple_mask &= ~(1 << filter->index);
TAILQ_REMOVE(&filter_info->twotuple_list, filter, entries);
- rte_free(filter);
E1000_WRITE_REG(hw, E1000_TTQF(filter->index), E1000_TTQF_DISABLE_MASK);
E1000_WRITE_REG(hw, E1000_IMIR(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIREXT(filter->index), 0);
+ rte_free(filter);
return 0;
}
@@ -4298,7 +4298,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
filter_info->fivetuple_mask &= ~(1 << filter->index);
TAILQ_REMOVE(&filter_info->fivetuple_list, filter, entries);
- rte_free(filter);
E1000_WRITE_REG(hw, E1000_FTQF(filter->index),
E1000_FTQF_VF_BP | E1000_FTQF_MASK);
@@ -4307,6 +4306,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
E1000_WRITE_REG(hw, E1000_SPQF(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIR(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIREXT(filter->index), 0);
+ rte_free(filter);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.389249014 +0800
+++ 0022-net-e1000-fix-use-after-free-in-filter-flush.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From 58196dc411576925a1d66b0da1d11b06072a7ac2 Mon Sep 17 00:00:00 2001
+From cda329444d492b928d0e9f71c5e3bdfb9acc80e4 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 58196dc411576925a1d66b0da1d11b06072a7ac2 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 1e0a483d4a..d3a9181874 100644
+index d64a1aedd3..222e359ed9 100644
@@ -28 +30 @@
-@@ -3907,11 +3907,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
+@@ -3857,11 +3857,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
@@ -41 +43 @@
-@@ -4348,7 +4348,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
+@@ -4298,7 +4298,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
@@ -49 +51 @@
-@@ -4357,6 +4356,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
+@@ -4307,6 +4306,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 128+ messages in thread
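[Editor's note: the fix above reorders the teardown so that the hardware register writes, which read filter->index, happen before rte_free(). The sketch below illustrates that ordering rule with hypothetical names (demo_filter, disable_hw_slot); it is not the driver's real code, only a minimal model of "consume every field first, free last".]

```c
#include <stdlib.h>

/* Hypothetical filter object standing in for the igb 2-tuple filter. */
struct demo_filter {
	int index;
};

static int last_disabled_index = -1;

/* Stand-in for the E1000_WRITE_REG() calls that consume filter->index. */
static void disable_hw_slot(int index)
{
	last_disabled_index = index;
}

/* Correct teardown order: every read of the object happens before the
 * free. Freeing first and then reading f->index is exactly the
 * use-after-free the patch removes. */
static int run_uaf_example(void)
{
	struct demo_filter *f = malloc(sizeof(*f));

	if (f == NULL)
		return -1;
	f->index = 7;
	disable_hw_slot(f->index); /* last access to *f */
	free(f);                   /* free strictly after the last access */
	return last_disabled_index;
}
```

The same pattern applies to both hunks of the patch: rte_free(filter) simply moves below the final E1000_WRITE_REG() that uses filter->index.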
* patch 'net/nfp: fix double free in flow destroy' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (21 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
` (97 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6560fe7e85a85543bd747377f9931aca3f325200
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6560fe7e85a85543bd747377f9931aca3f325200 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:15 -0700
Subject: [PATCH] net/nfp: fix double free in flow destroy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 ]
Calling rte_free() twice on the same object will corrupt the heap.
Warning is:
In function 'nfp_pre_tun_table_check_del',
inlined from 'nfp_flow_destroy' at
../drivers/net/nfp/flower/nfp_flower_flow.c:5143:9:
../drivers/net/nfp/flower/nfp_flower_flow.c:3830:9:
error: pointer 'entry' used after 'rte_free'
[-Werror=use-after-free]
3830 | rte_free(entry);
| ^~~~~~~~~~~~~~~
../drivers/net/nfp/flower/nfp_flower_flow.c:3825:9:
note: call to 'rte_free' here
3825 | rte_free(entry);
| ^~~~~~~~~~~~~~~
Bugzilla ID: 1555
Fixes: d3c33bdf1f18 ("net/nfp: prepare for IPv4 UDP tunnel decap flow action")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/nfp/nfp_flow.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 91ebee5db4..13f58b210e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -3177,7 +3177,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
goto free_entry;
}
- rte_free(entry);
rte_free(find_entry);
priv->pre_tun_cnt--;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.424343613 +0800
+++ 0023-net-nfp-fix-double-free-in-flow-destroy.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 Mon Sep 17 00:00:00 2001
+From 6560fe7e85a85543bd747377f9931aca3f325200 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -33 +35 @@
- drivers/net/nfp/flower/nfp_flower_flow.c | 1 -
+ drivers/net/nfp/nfp_flow.c | 1 -
@@ -36,5 +38,5 @@
-diff --git a/drivers/net/nfp/flower/nfp_flower_flow.c b/drivers/net/nfp/flower/nfp_flower_flow.c
-index 0078455658..64a0062c8b 100644
---- a/drivers/net/nfp/flower/nfp_flower_flow.c
-+++ b/drivers/net/nfp/flower/nfp_flower_flow.c
-@@ -3822,7 +3822,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
+diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
+index 91ebee5db4..13f58b210e 100644
+--- a/drivers/net/nfp/nfp_flow.c
++++ b/drivers/net/nfp/nfp_flow.c
+@@ -3177,7 +3177,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
^ permalink raw reply [flat|nested] 128+ messages in thread
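[Editor's note: the bug above comes from two aliases (entry and the table-lookup result find_entry) referring to the same allocation, both of which were freed. A minimal sketch of the rule, with hypothetical names (counted_free, run_double_free_example) rather than the nfp driver's actual code:]

```c
#include <stdlib.h>

static int free_count;

/* Instrumented free() so the example can count releases. */
static void counted_free(void *p)
{
	if (p != NULL)
		free_count++;
	free(p);
}

/* When two pointers alias one allocation, release it through exactly
 * one of them; freeing through both is the heap corruption the patch
 * fixes by dropping the redundant rte_free(entry). */
static int run_double_free_example(void)
{
	int *entry = malloc(sizeof(*entry));
	int *find_entry;

	if (entry == NULL)
		return -1;
	find_entry = entry;       /* alias produced by a table lookup */

	free_count = 0;
	counted_free(find_entry); /* single release through one alias */
	entry = NULL;             /* neutralize the stale alias */
	return free_count;        /* exactly one free happened */
}
```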
* patch 'net/sfc: fix use after free in debug logs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (22 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
` (96 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Ivan Malov, Andrew Rybchenko, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5a5a90d8fd00b0afb5d50081df1081412c601514
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5a5a90d8fd00b0afb5d50081df1081412c601514 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:13 -0700
Subject: [PATCH] net/sfc: fix use after free in debug logs
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 757b0b6f207c072a550f43836856235aa41553ad ]
If compiler detection of use-after-free is enabled, then this driver's
debug messages will cause warnings. Move each debug message
before the object is freed.
Bugzilla ID: 1551
Fixes: 55c1238246d5 ("net/sfc: add more debug messages to transfer flows")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ivan Malov <ivan.malov@arknetworks.am>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
 drivers/net/sfc/sfc_flow_rss.c |  4 ++--
drivers/net/sfc/sfc_mae.c | 23 +++++++++--------------
2 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index e28c943335..8e2749833b 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -303,9 +303,9 @@ sfc_flow_rss_ctx_del(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
TAILQ_REMOVE(&flow_rss->ctx_list, ctx, entries);
rte_free(ctx->qid_offsets);
- rte_free(ctx);
-
sfc_dbg(sa, "flow-rss: deleted ctx=%p", ctx);
+
+ rte_free(ctx);
}
static int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 60ff6d2181..8f74f10390 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -400,9 +400,8 @@ sfc_mae_outer_rule_del(struct sfc_adapter *sa,
efx_mae_match_spec_fini(sa->nic, rule->match_spec);
TAILQ_REMOVE(&mae->outer_rules, rule, entries);
- rte_free(rule);
-
sfc_dbg(sa, "deleted outer_rule=%p", rule);
+ rte_free(rule);
}
static int
@@ -585,9 +584,8 @@ sfc_mae_mac_addr_del(struct sfc_adapter *sa, struct sfc_mae_mac_addr *mac_addr)
}
TAILQ_REMOVE(&mae->mac_addrs, mac_addr, entries);
- rte_free(mac_addr);
-
sfc_dbg(sa, "deleted mac_addr=%p", mac_addr);
+ rte_free(mac_addr);
}
enum sfc_mae_mac_addr_type {
@@ -785,10 +783,10 @@ sfc_mae_encap_header_del(struct sfc_adapter *sa,
}
TAILQ_REMOVE(&mae->encap_headers, encap_header, entries);
+ sfc_dbg(sa, "deleted encap_header=%p", encap_header);
+
rte_free(encap_header->buf);
rte_free(encap_header);
-
- sfc_dbg(sa, "deleted encap_header=%p", encap_header);
}
static int
@@ -983,9 +981,8 @@ sfc_mae_counter_del(struct sfc_adapter *sa, struct sfc_mae_counter *counter)
}
TAILQ_REMOVE(&mae->counters, counter, entries);
- rte_free(counter);
-
sfc_dbg(sa, "deleted counter=%p", counter);
+ rte_free(counter);
}
static int
@@ -1165,9 +1162,8 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
sfc_mae_mac_addr_del(sa, action_set->src_mac_addr);
sfc_mae_counter_del(sa, action_set->counter);
TAILQ_REMOVE(&mae->action_sets, action_set, entries);
- rte_free(action_set);
-
sfc_dbg(sa, "deleted action_set=%p", action_set);
+ rte_free(action_set);
}
static int
@@ -1401,10 +1397,10 @@ sfc_mae_action_set_list_del(struct sfc_adapter *sa,
sfc_mae_action_set_del(sa, action_set_list->action_sets[i]);
TAILQ_REMOVE(&mae->action_set_lists, action_set_list, entries);
+ sfc_dbg(sa, "deleted action_set_list=%p", action_set_list);
+
rte_free(action_set_list->action_sets);
rte_free(action_set_list);
-
- sfc_dbg(sa, "deleted action_set_list=%p", action_set_list);
}
static int
@@ -1667,9 +1663,8 @@ sfc_mae_action_rule_del(struct sfc_adapter *sa,
sfc_mae_outer_rule_del(sa, rule->outer_rule);
TAILQ_REMOVE(&mae->action_rules, rule, entries);
- rte_free(rule);
-
sfc_dbg(sa, "deleted action_rule=%p", rule);
+ rte_free(rule);
}
static int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.465646112 +0800
+++ 0024-net-sfc-fix-use-after-free-in-debug-logs.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From 757b0b6f207c072a550f43836856235aa41553ad Mon Sep 17 00:00:00 2001
+From 5a5a90d8fd00b0afb5d50081df1081412c601514 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 757b0b6f207c072a550f43836856235aa41553ad ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'raw/ifpga/base: fix use after free' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (23 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
` (95 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:16 -0700
Subject: [PATCH] raw/ifpga/base: fix use after free
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 11986223b54d981300e9de2d365c494eb274645c ]
The TAILQ_FOREACH() macro would refer to info after it
had been freed. Fix by introducing TAILQ_FOREACH_SAFE here.
Fixes: 4a19f89104f8 ("raw/ifpga/base: support multiple cards")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/raw/ifpga/base/opae_intel_max10.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/raw/ifpga/base/opae_intel_max10.c b/drivers/raw/ifpga/base/opae_intel_max10.c
index dd97a5f9fd..d5a9ceb6e3 100644
--- a/drivers/raw/ifpga/base/opae_intel_max10.c
+++ b/drivers/raw/ifpga/base/opae_intel_max10.c
@@ -6,6 +6,13 @@
#include <libfdt.h>
#include "opae_osdep.h"
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = TAILQ_FIRST((head)); \
+ (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
int max10_sys_read(struct intel_max10_device *dev,
unsigned int offset, unsigned int *val)
{
@@ -746,9 +753,9 @@ static int fdt_get_named_reg(const void *fdt, int node, const char *name,
static void max10_sensor_uinit(struct intel_max10_device *dev)
{
- struct opae_sensor_info *info;
+ struct opae_sensor_info *info, *next;
- TAILQ_FOREACH(info, &dev->opae_sensor_list, node) {
+ TAILQ_FOREACH_SAFE(info, &dev->opae_sensor_list, node, next) {
TAILQ_REMOVE(&dev->opae_sensor_list, info, node);
opae_free(info);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.515775011 +0800
+++ 0025-raw-ifpga-base-fix-use-after-free.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From 11986223b54d981300e9de2d365c494eb274645c Mon Sep 17 00:00:00 2001
+From be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 11986223b54d981300e9de2d365c494eb274645c ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
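[Editor's note: TAILQ_FOREACH() reads TAILQ_NEXT(var) at the top of each iteration, so freeing the current node inside the body dereferences freed memory on the next pass. The _SAFE variant caches the successor before the body runs. A self-contained sketch with hypothetical names (demo_node, drain_demo_list) mirroring the max10_sensor_uinit() fix:]

```c
#include <stdlib.h>
#include <sys/queue.h>

/* Same fallback the patch adds for toolchains whose <sys/queue.h>
 * lacks the _SAFE variant. */
#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
	for ((var) = TAILQ_FIRST((head)); \
	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
	    (var) = (tvar))
#endif

/* Hypothetical node type standing in for opae_sensor_info. */
struct demo_node {
	TAILQ_ENTRY(demo_node) link;
};
TAILQ_HEAD(demo_list, demo_node);

/* Drain the list the way max10_sensor_uinit() now does: the successor
 * is cached in 'next' before free(n), so the loop never touches freed
 * memory. Returns how many nodes were freed. */
static int drain_demo_list(struct demo_list *head)
{
	struct demo_node *n, *next;
	int freed = 0;

	TAILQ_FOREACH_SAFE(n, head, link, next) {
		TAILQ_REMOVE(head, n, link);
		free(n);
		freed++;
	}
	return freed;
}

static int run_tailq_example(void)
{
	struct demo_list head = TAILQ_HEAD_INITIALIZER(head);
	int i;

	for (i = 0; i < 3; i++) {
		struct demo_node *n = malloc(sizeof(*n));

		if (n == NULL)
			return -1;
		TAILQ_INSERT_TAIL(&head, n, link);
	}
	return drain_demo_list(&head);
}
```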
* patch 'raw/ifpga: fix free function mismatch in interrupt config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (24 preceding siblings ...)
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
` (94 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a2cef42f63ee851b84e63187f31783b9f032af5f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a2cef42f63ee851b84e63187f31783b9f032af5f Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:18 -0700
Subject: [PATCH] raw/ifpga: fix free function mismatch in interrupt config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d891a597895bb65db42404440660f82092780750 ]
The raw ifpga driver redefines malloc to be opae_malloc
and free to be opae_free, which is a bad idea.
This leads to a case where the interrupt efd array is allocated with
calloc() and then passed to rte_free().
The workaround is to allocate the array with rte_calloc() instead.
Fixes: d61138d4f0e2 ("drivers: remove direct access to interrupt handle")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/raw/ifpga/ifpga_rawdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 997fbf8a0d..3b4d771d1b 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -1498,7 +1498,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
nb_intr = rte_intr_nb_intr_get(*intr_handle);
- intr_efds = calloc(nb_intr, sizeof(int));
+ intr_efds = rte_calloc("ifpga_efds", nb_intr, sizeof(int), 0);
if (!intr_efds)
return -ENOMEM;
@@ -1507,7 +1507,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
if (ret) {
- free(intr_efds);
+ rte_free(intr_efds);
return -EINVAL;
}
}
@@ -1516,13 +1516,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
ret = rte_intr_callback_register(*intr_handle,
handler, (void *)arg);
if (ret) {
- free(intr_efds);
+ rte_free(intr_efds);
return -EINVAL;
}
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt", name);
- free(intr_efds);
+ rte_free(intr_efds);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.555587211 +0800
+++ 0026-raw-ifpga-fix-free-function-mismatch-in-interrupt-co.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From d891a597895bb65db42404440660f82092780750 Mon Sep 17 00:00:00 2001
+From a2cef42f63ee851b84e63187f31783b9f032af5f Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d891a597895bb65db42404440660f82092780750 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 113a22b0a7..5b9b596435 100644
+index 997fbf8a0d..3b4d771d1b 100644
@@ -31 +33 @@
-@@ -1499,7 +1499,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1498,7 +1498,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -40 +42 @@
-@@ -1508,7 +1508,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1507,7 +1507,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -49 +51 @@
-@@ -1517,13 +1517,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1516,13 +1516,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
^ permalink raw reply [flat|nested] 128+ messages in thread
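[Editor's note: this fix and the vhost_blk one that follows enforce the same rule — memory must be released by the deallocator paired with its allocator, because rte_calloc()/rte_free() manage DPDK's hugepage heap while libc malloc()/free() manage a different one. A minimal sketch under that assumption; demo_rte_calloc/demo_rte_free are illustrative stand-ins, not the DPDK API:]

```c
#include <stdlib.h>

static int rte_alloc_balance; /* outstanding "rte" allocations */

/* Stand-in for rte_calloc(): tracks how many of its allocations are
 * still live so a mismatch is observable in this sketch. */
static void *demo_rte_calloc(size_t n, size_t elt_size)
{
	void *p = calloc(n, elt_size);

	if (p != NULL)
		rte_alloc_balance++;
	return p;
}

/* Stand-in for rte_free(): the only valid release path for pointers
 * obtained from demo_rte_calloc(). */
static void demo_rte_free(void *p)
{
	if (p != NULL)
		rte_alloc_balance--;
	free(p);
}

/* The rule the patch enforces on every exit path of
 * ifpga_register_msix_irq(): allocator and deallocator must match. */
static int run_alloc_pair_example(void)
{
	int *intr_efds = demo_rte_calloc(4, sizeof(*intr_efds));

	if (intr_efds == NULL)
		return -1;
	demo_rte_free(intr_efds); /* not free(): allocators must match */
	return rte_alloc_balance; /* 0 means every pair matched */
}
```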
* patch 'examples/vhost: fix free function mismatch' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (25 preceding siblings ...)
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
` (93 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Chengwen Feng, Chenbo Xia, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=780c6918ffeccf266bf5505dc2b315a348d60d1e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 780c6918ffeccf266bf5505dc2b315a348d60d1e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:09 -0700
Subject: [PATCH] examples/vhost: fix free function mismatch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ae67f7d0256687fdfb24d27ee94b20d88c65108e ]
The pointer bdev is allocated with rte_zmalloc() and then
incorrectly freed with free(), which will lead to pool corruption.
Bugzilla ID: 1553
Fixes: c19beb3f38cd ("examples/vhost_blk: introduce vhost storage sample")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
examples/vhost_blk/vhost_blk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
index 376f7b89a7..4dc99eb648 100644
--- a/examples/vhost_blk/vhost_blk.c
+++ b/examples/vhost_blk/vhost_blk.c
@@ -776,7 +776,7 @@ vhost_blk_bdev_construct(const char *bdev_name,
bdev->data = rte_zmalloc(NULL, blk_cnt * blk_size, 0);
if (!bdev->data) {
fprintf(stderr, "No enough reserved huge memory for disk\n");
- free(bdev);
+ rte_free(bdev);
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.589520410 +0800
+++ 0027-examples-vhost-fix-free-function-mismatch.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From ae67f7d0256687fdfb24d27ee94b20d88c65108e Mon Sep 17 00:00:00 2001
+From 780c6918ffeccf266bf5505dc2b315a348d60d1e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ae67f7d0256687fdfb24d27ee94b20d88c65108e ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 03f1ac9c3f..9c9e326949 100644
+index 376f7b89a7..4dc99eb648 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/nfb: fix use after free' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (26 preceding siblings ...)
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
` (92 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: xuemingl, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5facb377a447b0150f17cf19b1d2ab006f721a03
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5facb377a447b0150f17cf19b1d2ab006f721a03 Mon Sep 17 00:00:00 2001
From: Thomas Monjalon <thomas@monjalon.net>
Date: Thu, 10 Oct 2024 19:11:07 +0200
Subject: [PATCH] net/nfb: fix use after free
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 76da9834ebb6e43e005bd5895ff4568d0e7be78f ]
With the annotations added to the allocation functions
in commit 80da7efbb4c4 ("eal: annotate allocation functions"),
more issues are detected at compilation time:
nfb_rx.c:133:28: error: pointer 'rxq' used after 'rte_free'
It is fixed by moving the assignment before freeing the parent pointer.
Fixes: 6435f9a0ac22 ("net/nfb: add new netcope driver")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfb/nfb_tx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 8a9b232305..7941197b77 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -129,7 +129,7 @@ nfb_eth_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (rxq->queue != NULL) {
ndp_close_rx_queue(rxq->queue);
- rte_free(rxq);
rxq->queue = NULL;
+ rte_free(rxq);
}
}
diff --git a/drivers/net/nfb/nfb_tx.c b/drivers/net/nfb/nfb_tx.c
index d49fc324e7..5c38d69934 100644
--- a/drivers/net/nfb/nfb_tx.c
+++ b/drivers/net/nfb/nfb_tx.c
@@ -108,7 +108,7 @@ nfb_eth_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (txq->queue != NULL) {
ndp_close_tx_queue(txq->queue);
- rte_free(txq);
txq->queue = NULL;
+ rte_free(txq);
}
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.621227709 +0800
+++ 0028-net-nfb-fix-use-after-free.patch 2024-11-11 14:23:05.062192841 +0800
@@ -1 +1 @@
-From 76da9834ebb6e43e005bd5895ff4568d0e7be78f Mon Sep 17 00:00:00 2001
+From 5facb377a447b0150f17cf19b1d2ab006f721a03 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 76da9834ebb6e43e005bd5895ff4568d0e7be78f ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index f72afafe8f..462bc3b50d 100644
+index 8a9b232305..7941197b77 100644
@@ -38 +40 @@
-index a1318a4205..cf99268c43 100644
+index d49fc324e7..5c38d69934 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'power: enable CPPC' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (27 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
` (91 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Wathsala Vithanage; +Cc: xuemingl, Dhruv Tripathi, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(i.e. not only metadata diffs), please double-check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=418efc7dd043b02b6666fe70b88613dd8984bb98
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 418efc7dd043b02b6666fe70b88613dd8984bb98 Mon Sep 17 00:00:00 2001
From: Wathsala Vithanage <wathsala.vithanage@arm.com>
Date: Thu, 10 Oct 2024 14:17:36 +0000
Subject: [PATCH] power: enable CPPC
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 35220c7cb3aff022b3a41919139496326ef6eecc ]
The power library already supports the Linux CPPC driver,
but initialization was failing.
Enable its use in the drivers check,
and fix the name of the CPPC driver.
Fixes: ef1cc88f1837 ("power: support cppc_cpufreq driver")
Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
---
lib/power/power_cppc_cpufreq.c | 2 +-
lib/power/rte_power_pmd_mgmt.c | 11 ++++++-----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index bb70f6ae52..f2ba684c83 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -36,7 +36,7 @@
#define POWER_SYSFILE_SYS_MAX \
"/sys/devices/system/cpu/cpu%u/cpufreq/cpuinfo_max_freq"
-#define POWER_CPPC_DRIVER "cppc-cpufreq"
+#define POWER_CPPC_DRIVER "cppc_cpufreq"
#define BUS_FREQ 100000
enum power_state {
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 6f18ed0adf..20aa753c3a 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -419,11 +419,12 @@ check_scale(unsigned int lcore)
{
enum power_management_env env;
- /* only PSTATE and ACPI modes are supported */
+ /* only PSTATE, AMD-PSTATE, ACPI and CPPC modes are supported */
if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) &&
!rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) &&
- !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n");
+ !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ) &&
+ !rte_power_check_env_supported(PM_ENV_CPPC_CPUFREQ)) {
+ RTE_LOG(DEBUG, POWER, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported\n");
return -ENOTSUP;
}
/* ensure we could initialize the power library */
@@ -433,8 +434,8 @@ check_scale(unsigned int lcore)
/* ensure we initialized the correct env */
env = rte_power_get_env();
if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ &&
- env != PM_ENV_AMD_PSTATE_CPUFREQ) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n");
+ env != PM_ENV_AMD_PSTATE_CPUFREQ && env != PM_ENV_CPPC_CPUFREQ) {
+ RTE_LOG(DEBUG, POWER, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes\n");
return -ENOTSUP;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.652309809 +0800
+++ 0029-power-enable-CPPC.patch 2024-11-11 14:23:05.062192841 +0800
@@ -1 +1 @@
-From 35220c7cb3aff022b3a41919139496326ef6eecc Mon Sep 17 00:00:00 2001
+From 418efc7dd043b02b6666fe70b88613dd8984bb98 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 35220c7cb3aff022b3a41919139496326ef6eecc ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 32aaacb948..e68b39b424 100644
+index bb70f6ae52..f2ba684c83 100644
@@ -35 +37 @@
-index b1c18a5f56..830a6c7a97 100644
+index 6f18ed0adf..20aa753c3a 100644
@@ -47 +49 @@
-- POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes are supported");
+- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n");
@@ -50 +52 @@
-+ POWER_LOG(DEBUG, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported");
++ RTE_LOG(DEBUG, POWER, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported\n");
@@ -59 +61 @@
-- POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes were initialized");
+- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n");
@@ -61 +63 @@
-+ POWER_LOG(DEBUG, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes");
++ RTE_LOG(DEBUG, POWER, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes\n");
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'fib6: add runtime checks in AVX512 lookup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (28 preceding siblings ...)
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
` (90 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, David Marchand, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Tue, 8 Oct 2024 17:31:36 +0000
Subject: [PATCH] fib6: add runtime checks in AVX512 lookup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 ]
AVX512 lookup function requires CPU to support RTE_CPUFLAG_AVX512DQ and
RTE_CPUFLAG_AVX512BW. Add runtime checks of these two flags when deciding
if vector function can be used.
Fixes: 1e5630e40d95 ("fib6: add AVX512 lookup")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
lib/fib/trie.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/fib/trie.c b/lib/fib/trie.c
index 09470e7287..7b33cdaa7b 100644
--- a/lib/fib/trie.c
+++ b/lib/fib/trie.c
@@ -46,8 +46,10 @@ static inline rte_fib6_lookup_fn_t
get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
{
#ifdef CC_TRIE_AVX512_SUPPORT
- if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
- (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0 ||
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ) <= 0 ||
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) <= 0 ||
+ rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512)
return NULL;
switch (nh_sz) {
case RTE_FIB6_TRIE_2B:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.699294108 +0800
+++ 0030-fib6-add-runtime-checks-in-AVX512-lookup.patch 2024-11-11 14:23:05.072192841 +0800
@@ -1 +1 @@
-From 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 Mon Sep 17 00:00:00 2001
+From f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'pcapng: fix handling of chained mbufs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (29 preceding siblings ...)
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
` (89 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Oleksandr Nahnybida; +Cc: xuemingl, Stephen Hemminger, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f Mon Sep 17 00:00:00 2001
From: Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Date: Fri, 13 Sep 2024 15:34:03 +0300
Subject: [PATCH] pcapng: fix handling of chained mbufs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6db358536fee7891b5cb670df94ec87543ddd0fb ]
The pcapng library generates corrupted files when dealing with chained mbufs.
This issue arises because in rte_pcapng_copy the length of the EPB block
is incorrectly calculated using the data_len of the first mbuf instead
of the pkt_len, even though rte_pcapng_write_packets correctly writes
the mbuf chain to disk.
This fix ensures that the block length is calculated based on the pkt_len,
aligning it with the actual data written to disk.
Fixes: 8d23ce8f5ee9 ("pcapng: add new library for writing pcapng files")
Signed-off-by: Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
.mailmap | 1 +
app/test/test_pcapng.c | 12 ++++++++++--
lib/pcapng/rte_pcapng.c | 12 ++++++------
3 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8b9e849d05..4022645615 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1059,6 +1059,7 @@ Odi Assli <odia@nvidia.com>
Ognjen Joldzic <ognjen.joldzic@gmail.com>
Ola Liljedahl <ola.liljedahl@arm.com>
Oleg Polyakov <olegp123@walla.co.il>
+Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Olga Shern <olgas@nvidia.com> <olgas@mellanox.com>
Olivier Gournet <ogournet@corp.free.fr>
Olivier Matz <olivier.matz@6wind.com>
diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c
index 89535efad0..5cdde0542a 100644
--- a/app/test/test_pcapng.c
+++ b/app/test/test_pcapng.c
@@ -102,6 +102,14 @@ mbuf1_prepare(struct dummy_mbuf *dm, uint32_t plen)
pkt.udp.dgram_len = rte_cpu_to_be_16(plen);
memcpy(rte_pktmbuf_mtod(dm->mb, void *), &pkt, sizeof(pkt));
+
+ /* Idea here is to create mbuf chain big enough that after mbuf deep copy they won't be
+ * compressed into single mbuf to properly test store of chained mbufs
+ */
+ dummy_mbuf_prep(&dm->mb[1], dm->buf[1], sizeof(dm->buf[1]), pkt_len);
+ dummy_mbuf_prep(&dm->mb[2], dm->buf[2], sizeof(dm->buf[2]), pkt_len);
+ rte_pktmbuf_chain(&dm->mb[0], &dm->mb[1]);
+ rte_pktmbuf_chain(&dm->mb[0], &dm->mb[2]);
}
static int
@@ -117,7 +125,7 @@ test_setup(void)
/* Make a pool for cloned packets */
mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool",
- MAX_BURST, 0, 0,
+ MAX_BURST * 32, 0, 0,
rte_pcapng_mbuf_size(pkt_len) + 128,
SOCKET_ID_ANY, "ring_mp_sc");
if (mp == NULL) {
@@ -155,7 +163,7 @@ fill_pcapng_file(rte_pcapng_t *pcapng, unsigned int num_packets)
for (i = 0; i < burst_size; i++) {
struct rte_mbuf *mc;
- mc = rte_pcapng_copy(port_id, 0, orig, mp, pkt_len,
+ mc = rte_pcapng_copy(port_id, 0, orig, mp, rte_pktmbuf_pkt_len(orig),
RTE_PCAPNG_DIRECTION_IN, NULL);
if (mc == NULL) {
fprintf(stderr, "Cannot copy packet\n");
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 7254defce7..e5326c1d38 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -475,7 +475,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
const char *comment)
{
struct pcapng_enhance_packet_block *epb;
- uint32_t orig_len, data_len, padding, flags;
+ uint32_t orig_len, pkt_len, padding, flags;
struct pcapng_option *opt;
uint64_t timestamp;
uint16_t optlen;
@@ -516,8 +516,8 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
(md->ol_flags & RTE_MBUF_F_RX_RSS_HASH));
/* pad the packet to 32 bit boundary */
- data_len = rte_pktmbuf_data_len(mc);
- padding = RTE_ALIGN(data_len, sizeof(uint32_t)) - data_len;
+ pkt_len = rte_pktmbuf_pkt_len(mc);
+ padding = RTE_ALIGN(pkt_len, sizeof(uint32_t)) - pkt_len;
if (padding > 0) {
void *tail = rte_pktmbuf_append(mc, padding);
@@ -584,7 +584,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
goto fail;
epb->block_type = PCAPNG_ENHANCED_PACKET_BLOCK;
- epb->block_length = rte_pktmbuf_data_len(mc);
+ epb->block_length = rte_pktmbuf_pkt_len(mc);
/* Interface index is filled in later during write */
mc->port = port_id;
@@ -593,7 +593,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
timestamp = rte_get_tsc_cycles();
epb->timestamp_hi = timestamp >> 32;
epb->timestamp_lo = (uint32_t)timestamp;
- epb->capture_length = data_len;
+ epb->capture_length = pkt_len;
epb->original_length = orig_len;
/* set trailer of block length */
@@ -623,7 +623,7 @@ rte_pcapng_write_packets(rte_pcapng_t *self,
/* sanity check that is really a pcapng mbuf */
epb = rte_pktmbuf_mtod(m, struct pcapng_enhance_packet_block *);
if (unlikely(epb->block_type != PCAPNG_ENHANCED_PACKET_BLOCK ||
- epb->block_length != rte_pktmbuf_data_len(m))) {
+ epb->block_length != rte_pktmbuf_pkt_len(m))) {
rte_errno = EINVAL;
return -1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.730371307 +0800
+++ 0031-pcapng-fix-handling-of-chained-mbufs.patch 2024-11-11 14:23:05.072192841 +0800
@@ -1 +1 @@
-From 6db358536fee7891b5cb670df94ec87543ddd0fb Mon Sep 17 00:00:00 2001
+From e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6db358536fee7891b5cb670df94ec87543ddd0fb ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 62ef194168..aee7c91780 100644
+index 8b9e849d05..4022645615 100644
@@ -30 +32,2 @@
-@@ -1091,6 +1091,7 @@ Ognjen Joldzic <ognjen.joldzic@gmail.com>
+@@ -1059,6 +1059,7 @@ Odi Assli <odia@nvidia.com>
+ Ognjen Joldzic <ognjen.joldzic@gmail.com>
@@ -33 +35,0 @@
- Oleksandr Kolomeiets <okl-plv@napatech.com>
@@ -39 +41 @@
-index 2665b08c76..b219873c3a 100644
+index 89535efad0..5cdde0542a 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'app/dumpcap: fix handling of jumbo frames' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (30 preceding siblings ...)
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
` (88 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Tianli Lai, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=09c32b20ec3718838ed7e0e089dc2e7104770e40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 09c32b20ec3718838ed7e0e089dc2e7104770e40 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 3 Oct 2024 15:09:03 -0700
Subject: [PATCH] app/dumpcap: fix handling of jumbo frames
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 ]
If dumpcap (in legacy pcap mode) tried to handle a large segmented
frame it would core dump because rte_pktmbuf_read() would return NULL.
Fix by using the same logic as in the pcap PMD.
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Reported-by: Tianli Lai <laitianli@tom.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/dumpcap/main.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 76c7475114..213e764c2e 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -874,7 +874,7 @@ static ssize_t
pcap_write_packets(pcap_dumper_t *dumper,
struct rte_mbuf *pkts[], uint16_t n)
{
- uint8_t temp_data[RTE_MBUF_DEFAULT_BUF_SIZE];
+ uint8_t temp_data[RTE_ETHER_MAX_JUMBO_FRAME_LEN];
struct pcap_pkthdr header;
uint16_t i;
size_t total = 0;
@@ -883,14 +883,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
for (i = 0; i < n; i++) {
struct rte_mbuf *m = pkts[i];
+ size_t len, caplen;
- header.len = rte_pktmbuf_pkt_len(m);
- header.caplen = RTE_MIN(header.len, sizeof(temp_data));
+ len = caplen = rte_pktmbuf_pkt_len(m);
+ if (unlikely(!rte_pktmbuf_is_contiguous(m) && len > sizeof(temp_data)))
+ caplen = sizeof(temp_data);
+
+ header.len = len;
+ header.caplen = caplen;
pcap_dump((u_char *)dumper, &header,
- rte_pktmbuf_read(m, 0, header.caplen, temp_data));
+ rte_pktmbuf_read(m, 0, caplen, temp_data));
- total += sizeof(header) + header.len;
+ total += sizeof(header) + caplen;
}
return total;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.767246206 +0800
+++ 0032-app-dumpcap-fix-handling-of-jumbo-frames.patch 2024-11-11 14:23:05.082192840 +0800
@@ -1 +1 @@
-From 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 Mon Sep 17 00:00:00 2001
+From 09c32b20ec3718838ed7e0e089dc2e7104770e40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 6feb8f5672..fcfaa19951 100644
+index 76c7475114..213e764c2e 100644
@@ -23 +25 @@
-@@ -902,7 +902,7 @@ static ssize_t
+@@ -874,7 +874,7 @@ static ssize_t
@@ -32 +34 @@
-@@ -911,14 +911,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
+@@ -883,14 +883,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'ml/cnxk: fix handling of TVM model I/O' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (31 preceding siblings ...)
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
` (87 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Srikanth Yalavarthi; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e8a44520dc4a80da0f3d64f4fb9900820c783893
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e8a44520dc4a80da0f3d64f4fb9900820c783893 Mon Sep 17 00:00:00 2001
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
Date: Tue, 30 Jul 2024 22:41:03 -0700
Subject: [PATCH] ml/cnxk: fix handling of TVM model I/O
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c4636d36bc2cc3a370200245da69006d6f5d9852 ]
Fixed incorrect handling of TVM models with a single MRVL
layer. Set the I/O layout to packed and fixed the calculation
of quantized and dequantized data buffer addresses.
Fixes: 5cea2c67edfc ("ml/cnxk: update internal TVM model info structure")
Fixes: df2358f3adce ("ml/cnxk: add structures for TVM model type")
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
drivers/ml/cnxk/cnxk_ml_ops.c | 12 ++++++++----
drivers/ml/cnxk/mvtvm_ml_model.c | 2 +-
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 7bd73727e1..8863633155 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -1462,7 +1462,8 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf
d_offset = 0;
q_offset = 0;
for (i = 0; i < info->nb_inputs; i++) {
- if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+ if (model->type == ML_CNXK_MODEL_TYPE_TVM &&
+ model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) {
lcl_dbuffer = dbuffer[i]->addr;
lcl_qbuffer = qbuffer[i]->addr;
} else {
@@ -1474,7 +1475,8 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf
if (ret < 0)
return ret;
- if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
+ if ((model->type == ML_CNXK_MODEL_TYPE_GLOW) ||
+ (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) {
d_offset += info->input[i].sz_d;
q_offset += info->input[i].sz_q;
}
@@ -1516,7 +1518,8 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b
q_offset = 0;
d_offset = 0;
for (i = 0; i < info->nb_outputs; i++) {
- if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+ if (model->type == ML_CNXK_MODEL_TYPE_TVM &&
+ model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) {
lcl_qbuffer = qbuffer[i]->addr;
lcl_dbuffer = dbuffer[i]->addr;
} else {
@@ -1528,7 +1531,8 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b
if (ret < 0)
return ret;
- if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
+ if ((model->type == ML_CNXK_MODEL_TYPE_GLOW) ||
+ (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) {
q_offset += info->output[i].sz_q;
d_offset += info->output[i].sz_d;
}
diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c
index 0dbe08e988..bbda907714 100644
--- a/drivers/ml/cnxk/mvtvm_ml_model.c
+++ b/drivers/ml/cnxk/mvtvm_ml_model.c
@@ -352,7 +352,7 @@ tvm_mrvl_model:
metadata = &model->mvtvm.metadata;
strlcpy(info->name, metadata->model.name, TVMDP_NAME_STRLEN);
- info->io_layout = RTE_ML_IO_LAYOUT_SPLIT;
+ info->io_layout = RTE_ML_IO_LAYOUT_PACKED;
}
void
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.803313506 +0800
+++ 0033-ml-cnxk-fix-handling-of-TVM-model-I-O.patch 2024-11-11 14:23:05.082192840 +0800
@@ -1 +1 @@
-From c4636d36bc2cc3a370200245da69006d6f5d9852 Mon Sep 17 00:00:00 2001
+From e8a44520dc4a80da0f3d64f4fb9900820c783893 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c4636d36bc2cc3a370200245da69006d6f5d9852 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -65 +67 @@
-index 3ada6f42db..3c5ab0d2e1 100644
+index 0dbe08e988..bbda907714 100644
@@ -68 +70 @@
-@@ -356,7 +356,7 @@ tvm_mrvl_model:
+@@ -352,7 +352,7 @@ tvm_mrvl_model:
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/cnxk: fix Rx timestamp handling for VF' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (32 preceding siblings ...)
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
` (86 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=54799745107bd25078888a57d1e185e57763075b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 54799745107bd25078888a57d1e185e57763075b Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:40 +0530
Subject: [PATCH] net/cnxk: fix Rx timestamp handling for VF
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 ]
When timestamp is enabled on the PF in the kernel and the respective
VF is attached to an application in DPDK, mbuf_addr gets
corrupted in cnxk_nix_timestamp_dynfield() because
"tstamp_dynfield_offset" is zero for a PTP-enabled PF.
This patch fixes that.
Fixes: 76dff63874e3 ("net/cnxk: support base PTP timesync")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 12 +++++++++++-
drivers/net/cnxk/cn9k_ethdev.c | 12 +++++++++++-
drivers/net/cnxk/cnxk_ethdev.c | 2 +-
3 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 29b7f2ba5e..24c4c2d15e 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -473,7 +473,7 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
struct rte_eth_dev *eth_dev;
struct cn10k_eth_rxq *rxq;
- int i;
+ int i, rc;
if (!dev)
return -EINVAL;
@@ -496,7 +496,17 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
* and MTU setting also requires MBOX message to be
* sent(VF->PF)
*/
+ if (dev->ptp_en) {
+ rc = rte_mbuf_dyn_rx_timestamp_register
+ (&dev->tstamp.tstamp_dynfield_offset,
+ &dev->tstamp.rx_tstamp_dynflag);
+ if (rc != 0) {
+ plt_err("Failed to register Rx timestamp field/flag");
+ return -EINVAL;
+ }
+ }
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+ rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index b92b978a27..c06764d745 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -432,7 +432,7 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
struct rte_eth_dev *eth_dev;
struct cn9k_eth_rxq *rxq;
- int i;
+ int i, rc;
if (!dev)
return -EINVAL;
@@ -455,7 +455,17 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
* and MTU setting also requires MBOX message to be
* sent(VF->PF)
*/
+ if (dev->ptp_en) {
+ rc = rte_mbuf_dyn_rx_timestamp_register
+ (&dev->tstamp.tstamp_dynfield_offset,
+ &dev->tstamp.rx_tstamp_dynflag);
+ if (rc != 0) {
+ plt_err("Failed to register Rx timestamp field/flag");
+ return -EINVAL;
+ }
+ }
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+ rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 60baf806ab..f0cf376e7d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1734,7 +1734,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP || dev->ptp_en) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.847757505 +0800
+++ 0034-net-cnxk-fix-Rx-timestamp-handling-for-VF.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 Mon Sep 17 00:00:00 2001
+From 54799745107bd25078888a57d1e185e57763075b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index ad6bc1ec21..46476e386a 100644
+index 29b7f2ba5e..24c4c2d15e 100644
@@ -54 +56 @@
-index 84c88655f8..5417628368 100644
+index b92b978a27..c06764d745 100644
@@ -85 +87 @@
-index 33bac55704..74b266ad58 100644
+index 60baf806ab..f0cf376e7d 100644
@@ -88 +90 @@
-@@ -1751,7 +1751,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
+@@ -1734,7 +1734,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/cnxk: fix Rx offloads to handle timestamp' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (33 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
` (85 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=118f84d541146b7a073ca53849069f4be15b7036
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 118f84d541146b7a073ca53849069f4be15b7036 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:41 +0530
Subject: [PATCH] net/cnxk: fix Rx offloads to handle timestamp
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f12dab814f0898c661d32f6cdaaae6a11bbacb6e ]
Rx offload flags are updated to handle timestamps in the
VF when PTP is enabled on the respective PF in the kernel.
Fixes: c7c7c8ed7d47 ("net/cnxk: get PTP status")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 6 +++++-
drivers/net/cnxk/cn9k_ethdev.c | 5 ++++-
drivers/net/cnxk/cnxk_ethdev.h | 7 +++++++
3 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 24c4c2d15e..3b7de891e0 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -30,7 +30,7 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -508,6 +508,10 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
+ if (dev->cnxk_sso_ptp_tstamp_cb)
+ dev->cnxk_sso_ptp_tstamp_cb(eth_dev->data->port_id,
+ NIX_RX_OFFLOAD_TSTAMP_F, dev->ptp_en);
+
}
return 0;
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index c06764d745..dee0abdac5 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -30,7 +30,7 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -467,6 +467,9 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
+ if (dev->cnxk_sso_ptp_tstamp_cb)
+ dev->cnxk_sso_ptp_tstamp_cb(eth_dev->data->port_id,
+ NIX_RX_OFFLOAD_TSTAMP_F, dev->ptp_en);
}
return 0;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 4d3ebf123b..edbb492e2c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -424,6 +424,13 @@ struct cnxk_eth_dev {
/* MCS device */
struct cnxk_mcs_dev *mcs_dev;
struct cnxk_macsec_sess_list mcs_list;
+
+ /* SSO event dev */
+ void *evdev_priv;
+
+ /* SSO event dev ptp */
+ void (*cnxk_sso_ptp_tstamp_cb)
+ (uint16_t port_id, uint16_t flags, bool ptp_en);
};
struct cnxk_eth_rxq_sp {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.885498904 +0800
+++ 0035-net-cnxk-fix-Rx-offloads-to-handle-timestamp.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From f12dab814f0898c661d32f6cdaaae6a11bbacb6e Mon Sep 17 00:00:00 2001
+From 118f84d541146b7a073ca53849069f4be15b7036 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f12dab814f0898c661d32f6cdaaae6a11bbacb6e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 46476e386a..f5b485650e 100644
+index 24c4c2d15e..3b7de891e0 100644
@@ -44 +46 @@
-index 5417628368..c419593a23 100644
+index c06764d745..dee0abdac5 100644
@@ -67 +69 @@
-index 687c60c27d..5920488e1a 100644
+index 4d3ebf123b..edbb492e2c 100644
@@ -70,4 +72,4 @@
-@@ -433,6 +433,13 @@ struct cnxk_eth_dev {
-
- /* Eswitch domain ID */
- uint16_t switch_domain_id;
+@@ -424,6 +424,13 @@ struct cnxk_eth_dev {
+ /* MCS device */
+ struct cnxk_mcs_dev *mcs_dev;
+ struct cnxk_macsec_sess_list mcs_list;
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'event/cnxk: fix Rx timestamp handling' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (34 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
` (84 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7b06dc0a409f787ba2f45c98d97340effa6a6ff1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7b06dc0a409f787ba2f45c98d97340effa6a6ff1 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:42 +0530
Subject: [PATCH] event/cnxk: fix Rx timestamp handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 697883bcb0a84f06b52064ecbf60c619edbf9083 ]
Handle the timestamp correctly for a VF when PTP is enabled
before the application runs in event mode, by updating the
Rx offload flags in the link-up notification.
Fixes: f1cdb3c5b616 ("net/cnxk: enable PTP for event Rx adapter")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 32 ++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 31 +++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
3 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index bb0c910553..9f1d01f048 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -782,12 +782,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
}
+static void
+eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+{
+ struct rte_event_fp_ops *fp_op =
+ rte_event_fp_ops + event_dev->data->dev_id;
+
+ fp_op->dequeue = event_dev->dequeue;
+ fp_op->dequeue_burst = event_dev->dequeue_burst;
+}
+
+static void
+cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ if (ptp_en)
+ evdev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
+ else
+ evdev->tstamp[port_id] = NULL;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_tstamp_update(event_dev);
+}
+
static int
cn10k_sso_rx_adapter_queue_add(
const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id,
const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
struct roc_sso_hwgrp_stash stash;
struct cn10k_eth_rxq *rxq;
@@ -802,6 +830,10 @@ cn10k_sso_rx_adapter_queue_add(
queue_conf);
if (rc)
return -EINVAL;
+
+ cnxk_eth_dev->cnxk_sso_ptp_tstamp_cb = cn10k_sso_tstamp_hdl_update;
+ cnxk_eth_dev->evdev_priv = (struct rte_eventdev *)(uintptr_t)event_dev;
+
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
cn10k_sso_set_priv_mem(event_dev, lookup_mem);
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 9fb9ca0d63..ec3022b38c 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -834,12 +834,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
}
+static void
+eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+{
+ struct rte_event_fp_ops *fp_op =
+ rte_event_fp_ops + event_dev->data->dev_id;
+
+ fp_op->dequeue = event_dev->dequeue;
+ fp_op->dequeue_burst = event_dev->dequeue_burst;
+}
+
+static void
+cn9k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ if (ptp_en)
+ evdev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
+ else
+ evdev->tstamp[port_id] = NULL;
+ cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_tstamp_update(event_dev);
+}
+
static int
cn9k_sso_rx_adapter_queue_add(
const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id,
const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
struct cn9k_eth_rxq *rxq;
void *lookup_mem;
int rc;
@@ -853,6 +881,9 @@ cn9k_sso_rx_adapter_queue_add(
if (rc)
return -EINVAL;
+ cnxk_eth_dev->cnxk_sso_ptp_tstamp_cb = cn9k_sso_tstamp_hdl_update;
+ cnxk_eth_dev->evdev_priv = (struct rte_eventdev *)(uintptr_t)event_dev;
+
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
cn9k_sso_set_priv_mem(event_dev, lookup_mem);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 92aea92389..fe905b5461 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -212,7 +212,7 @@ static void
cnxk_sso_tstamp_cfg(uint16_t port_id, struct cnxk_eth_dev *cnxk_eth_dev,
struct cnxk_sso_evdev *dev)
{
- if (cnxk_eth_dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ if (cnxk_eth_dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP || cnxk_eth_dev->ptp_en)
dev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.922928903 +0800
+++ 0036-event-cnxk-fix-Rx-timestamp-handling.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 697883bcb0a84f06b52064ecbf60c619edbf9083 Mon Sep 17 00:00:00 2001
+From 7b06dc0a409f787ba2f45c98d97340effa6a6ff1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 697883bcb0a84f06b52064ecbf60c619edbf9083 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 5bd779990e..c8767a1b2b 100644
+index bb0c910553..9f1d01f048 100644
@@ -24 +26 @@
-@@ -842,12 +842,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -782,12 +782,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -65 +67 @@
-@@ -862,6 +890,10 @@ cn10k_sso_rx_adapter_queue_add(
+@@ -802,6 +830,10 @@ cn10k_sso_rx_adapter_queue_add(
@@ -77 +79 @@
-index 28350d1275..377e910837 100644
+index 9fb9ca0d63..ec3022b38c 100644
@@ -80 +82 @@
-@@ -911,12 +911,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -834,12 +834,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -121 +123 @@
-@@ -930,6 +958,9 @@ cn9k_sso_rx_adapter_queue_add(
+@@ -853,6 +881,9 @@ cn9k_sso_rx_adapter_queue_add(
@@ -132 +134 @@
-index 2c049e7041..3cac42111a 100644
+index 92aea92389..fe905b5461 100644
@@ -135 +137 @@
-@@ -213,7 +213,7 @@ static void
+@@ -212,7 +212,7 @@ static void
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/cnxk: fix MAC address change with active VF' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (35 preceding siblings ...)
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
` (83 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1ba168d355ced3b1fb8513939d0dce8df4a5c8f1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1ba168d355ced3b1fb8513939d0dce8df4a5c8f1 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Tue, 1 Oct 2024 11:30:45 +0530
Subject: [PATCH] common/cnxk: fix MAC address change with active VF
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 ]
If the device is in a reconfigure state, it throws an error
while changing the default MAC or adding a new MAC to the
LMAC filter table when there are active VFs on a PF.
Allow MAC address set/add even when active VFs are present
on the PF.
Fixes: 313cc41830ec ("common/cnxk: support NIX MAC operations")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/roc_nix_mac.c | 10 ----------
1 file changed, 10 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_mac.c b/drivers/common/cnxk/roc_nix_mac.c
index 2d1c29dd66..ce3fb034c5 100644
--- a/drivers/common/cnxk/roc_nix_mac.c
+++ b/drivers/common/cnxk/roc_nix_mac.c
@@ -91,11 +91,6 @@ roc_nix_mac_addr_set(struct roc_nix *roc_nix, const uint8_t addr[])
goto exit;
}
- if (dev_active_vfs(&nix->dev)) {
- rc = NIX_ERR_OP_NOTSUP;
- goto exit;
- }
-
req = mbox_alloc_msg_cgx_mac_addr_set(mbox);
if (req == NULL)
goto exit;
@@ -152,11 +147,6 @@ roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[])
goto exit;
}
- if (dev_active_vfs(&nix->dev)) {
- rc = NIX_ERR_OP_NOTSUP;
- goto exit;
- }
-
req = mbox_alloc_msg_cgx_mac_addr_add(mbox);
mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.954124602 +0800
+++ 0037-common-cnxk-fix-MAC-address-change-with-active-VF.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 Mon Sep 17 00:00:00 2001
+From 1ba168d355ced3b1fb8513939d0dce8df4a5c8f1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 0ffd05e4d4..54db1adf17 100644
+index 2d1c29dd66..ce3fb034c5 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/cnxk: fix inline CTX write' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (36 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
` (82 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Tue, 1 Oct 2024 11:30:47 +0530
Subject: [PATCH] common/cnxk: fix inline CTX write
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6c3de40af8362d2d7eede3b4fd12075fce964f4d ]
Reading the CPT_LF_CTX_ERR CSR ensures that the writes for
FLUSH are complete and also indicates whether the flush
has finished.
Fixes: 71213a8b773c ("common/cnxk: support CPT CTX write through microcode op")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index bc9cc2f429..ba51ddd8c8 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -1669,6 +1669,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
struct nix_inl_dev *inl_dev = NULL;
struct roc_cpt_lf *outb_lf = NULL;
union cpt_lf_ctx_flush flush;
+ union cpt_lf_ctx_err err;
bool get_inl_lf = true;
uintptr_t rbase;
struct nix *nix;
@@ -1710,6 +1711,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
flush.s.cptr = ((uintptr_t)sa_cptr) >> 7;
plt_write64(flush.u, rbase + CPT_LF_CTX_FLUSH);
+ plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+ /* Read a CSR to ensure that the FLUSH operation is complete */
+ err.u = plt_read64(rbase + CPT_LF_CTX_ERR);
+
+ if (err.s.flush_st_flt)
+ plt_warn("CTX flush could not complete");
return 0;
}
plt_nix_dbg("Could not get CPT LF for CTX write");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.989298502 +0800
+++ 0038-common-cnxk-fix-inline-CTX-write.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 6c3de40af8362d2d7eede3b4fd12075fce964f4d Mon Sep 17 00:00:00 2001
+From 06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6c3de40af8362d2d7eede3b4fd12075fce964f4d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index a984ac56d9..d0328921a7 100644
+index bc9cc2f429..ba51ddd8c8 100644
@@ -22 +24 @@
-@@ -1748,6 +1748,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
+@@ -1669,6 +1669,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
@@ -30 +32 @@
-@@ -1789,6 +1790,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
+@@ -1710,6 +1711,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/cnxk: fix CPT HW word size for outbound SA' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (37 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
` (81 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f7e17fe99ebca664289173ce71eaac65979caf85
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f7e17fe99ebca664289173ce71eaac65979caf85 Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Tue, 1 Oct 2024 11:30:48 +0530
Subject: [PATCH] common/cnxk: fix CPT HW word size for outbound SA
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9587a324f28e84937c9efef534da542c30ff122b ]
Fix the CPT HW word size initialized for the outbound SA
to be two words.
Fixes: 5ece02e736c3 ("common/cnxk: use common SA init API for default options")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_ie_ot.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad38f1..356bb8c5a5 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -38,5 +38,6 @@ roc_ot_ipsec_outb_sa_init(struct roc_ot_ipsec_outb_sa *sa)
offset = offsetof(struct roc_ot_ipsec_outb_sa, ctx);
sa->w0.s.ctx_push_size = (offset / ROC_CTX_UNIT_8B) + 1;
sa->w0.s.ctx_size = ROC_IE_OT_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_SA_CTX_HDR_SIZE;
sa->w0.s.aop_valid = 1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.021158301 +0800
+++ 0039-common-cnxk-fix-CPT-HW-word-size-for-outbound-SA.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 9587a324f28e84937c9efef534da542c30ff122b Mon Sep 17 00:00:00 2001
+From f7e17fe99ebca664289173ce71eaac65979caf85 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9587a324f28e84937c9efef534da542c30ff122b ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 465b2bc1fb..1b436dba72 100644
+index d0b7ad38f1..356bb8c5a5 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/cnxk: fix OOP handling for inbound packets' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (38 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
` (80 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=92230f6939d975f98fceaa01cf21af926741e87d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 92230f6939d975f98fceaa01cf21af926741e87d Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:54 +0530
Subject: [PATCH] net/cnxk: fix OOP handling for inbound packets
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d524a5526efa6b4cc01d13d8d50785c08d9b6891 ]
To handle OOP for inbound packets, processing is done based
on the NIX_RX_REAS_F flag. However, on SKUs that do not
support reassembly, the inbound out-of-place processing test
case fails because the reassembly flag is not updated in
event mode. This patch fixes that.
Fixes: 5e9e008d0127 ("net/cnxk: support inline ingress out-of-place session")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 10 ++++++++++
drivers/net/cnxk/cnxk_ethdev.h | 4 ++++
drivers/net/cnxk/version.map | 1 +
3 files changed, 15 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 4719f6b863..47822a3d84 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -14,6 +14,13 @@
#include <cnxk_security.h>
#include <roc_priv.h>
+cnxk_ethdev_rx_offload_cb_t cnxk_ethdev_rx_offload_cb;
+void
+cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb)
+{
+ cnxk_ethdev_rx_offload_cb = cb;
+}
+
static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
{ /* AES GCM */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
@@ -891,6 +898,9 @@ cn10k_eth_sec_session_create(void *device,
!(dev->rx_offload_flags & NIX_RX_REAS_F)) {
dev->rx_offload_flags |= NIX_RX_REAS_F;
cn10k_eth_set_rx_function(eth_dev);
+ if (cnxk_ethdev_rx_offload_cb)
+ cnxk_ethdev_rx_offload_cb(eth_dev->data->port_id,
+ NIX_RX_REAS_F);
}
} else {
struct roc_ot_ipsec_outb_sa *outb_sa, *outb_sa_dptr;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index edbb492e2c..138d206987 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -640,6 +640,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
int cnxk_nix_lookup_mem_metapool_clear(struct cnxk_eth_dev *dev);
__rte_internal
int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
+typedef void (*cnxk_ethdev_rx_offload_cb_t)(uint16_t port_id, uint64_t flags);
+__rte_internal
+void cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb);
+
struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
uint32_t spi, bool inb);
struct cnxk_eth_sec_sess *
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index 77f574bb16..078456a9ed 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -16,4 +16,5 @@ EXPERIMENTAL {
INTERNAL {
global:
cnxk_nix_inb_mode_set;
+ cnxk_ethdev_rx_offload_cb_register;
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.052138200 +0800
+++ 0040-net-cnxk-fix-OOP-handling-for-inbound-packets.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From d524a5526efa6b4cc01d13d8d50785c08d9b6891 Mon Sep 17 00:00:00 2001
+From 92230f6939d975f98fceaa01cf21af926741e87d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d524a5526efa6b4cc01d13d8d50785c08d9b6891 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index c9cb540e85..6acab8afa0 100644
+index 4719f6b863..47822a3d84 100644
@@ -26,3 +28,3 @@
-@@ -28,6 +28,13 @@ PLT_STATIC_ASSERT(RTE_PMD_CNXK_AR_WIN_SIZE_MAX == ROC_AR_WIN_SIZE_MAX);
- PLT_STATIC_ASSERT(RTE_PMD_CNXK_LOG_MIN_AR_WIN_SIZE_M1 == ROC_LOG_MIN_AR_WIN_SIZE_M1);
- PLT_STATIC_ASSERT(RTE_PMD_CNXK_AR_WINBITS_SZ == ROC_AR_WINBITS_SZ);
+@@ -14,6 +14,13 @@
+ #include <cnxk_security.h>
+ #include <roc_priv.h>
@@ -40 +42 @@
-@@ -908,6 +915,9 @@ cn10k_eth_sec_session_create(void *device,
+@@ -891,6 +898,9 @@ cn10k_eth_sec_session_create(void *device,
@@ -51 +53 @@
-index d4440b25ac..350adc1161 100644
+index edbb492e2c..138d206987 100644
@@ -54 +56 @@
-@@ -725,6 +725,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
+@@ -640,6 +640,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
@@ -66 +68 @@
-index 099c518ecf..edb0a1c059 100644
+index 77f574bb16..078456a9ed 100644
@@ -69 +71 @@
-@@ -23,4 +23,5 @@ EXPERIMENTAL {
+@@ -16,4 +16,5 @@ EXPERIMENTAL {
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'event/cnxk: fix OOP handling in event mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (39 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
` (79 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=80a61a0a00e593bc6242a9e8f0a210936885c9b3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 80a61a0a00e593bc6242a9e8f0a210936885c9b3 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:55 +0530
Subject: [PATCH] event/cnxk: fix OOP handling in event mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 01a990fe40e827c5f3497f785ce7fd68bff8ef5c ]
Update the event device with NIX_RX_REAS_F to handle
out-of-place processing on SKUs that do not support
reassembly, since the cn10k driver processes OOP with
NIX_RX_REAS_F enabled.
Fixes: 5e9e008d0127 ("net/cnxk: support inline ingress out-of-place session")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9f1d01f048..a44a33eae8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -783,7 +783,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
static void
-eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+eventdev_fops_update(struct rte_eventdev *event_dev)
{
struct rte_event_fp_ops *fp_op =
rte_event_fp_ops + event_dev->data->dev_id;
@@ -806,7 +806,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
else
evdev->tstamp[port_id] = NULL;
cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
- eventdev_fops_tstamp_update(event_dev);
+ eventdev_fops_update(event_dev);
+}
+
+static void
+cn10k_sso_rx_offload_cb(uint16_t port_id, uint64_t flags)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_update(event_dev);
}
static int
@@ -1116,6 +1129,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
return rc;
}
+ cnxk_ethdev_rx_offload_cb_register(cn10k_sso_rx_offload_cb);
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.082527300 +0800
+++ 0041-event-cnxk-fix-OOP-handling-in-event-mode.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 01a990fe40e827c5f3497f785ce7fd68bff8ef5c Mon Sep 17 00:00:00 2001
+From 80a61a0a00e593bc6242a9e8f0a210936885c9b3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 01a990fe40e827c5f3497f785ce7fd68bff8ef5c ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index c8767a1b2b..531c489172 100644
+index 9f1d01f048..a44a33eae8 100644
@@ -23 +25 @@
-@@ -843,7 +843,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -783,7 +783,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -32 +34 @@
-@@ -866,7 +866,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+@@ -806,7 +806,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
@@ -54 +56 @@
-@@ -1241,6 +1254,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
+@@ -1116,6 +1129,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/cnxk: fix base log level' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (40 preceding siblings ...)
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
` (78 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0dca79e1ec0c120c1059c4b015651e4bf1a5f309
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0dca79e1ec0c120c1059c4b015651e4bf1a5f309 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 14:17:10 +0530
Subject: [PATCH] common/cnxk: fix base log level
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f ]
In the a247fcd94598 changeset, the PMD log type was removed and
a driver-specific log type was added for CNXK.
This patch changes the log level of the CNXK base logs from NOTICE to INFO
to display logs while running applications.
Fixes: a247fcd94598 ("drivers: use dedicated log macros instead of PMD logtype")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/common/cnxk/roc_platform.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 80d81742a2..c57dcbe731 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -85,7 +85,7 @@ roc_plt_init(void)
return 0;
}
-RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, INFO);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_mbox, mbox, NOTICE);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_cpt, crypto, NOTICE);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ml, ml, NOTICE);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.115108999 +0800
+++ 0042-common-cnxk-fix-base-log-level.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f Mon Sep 17 00:00:00 2001
+From 0dca79e1ec0c120c1059c4b015651e4bf1a5f309 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 30379c7e5e..f1e0a93d97 100644
+index 80d81742a2..c57dcbe731 100644
* patch 'common/cnxk: fix IRQ reconfiguration' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (41 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
` (77 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Tue, 1 Oct 2024 18:41:09 +0530
Subject: [PATCH] common/cnxk: fix IRQ reconfiguration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 758b58f06a43564f435e3ecc1a8af994564a6b6b ]
Unregister the SSO device and NPA IRQs before resizing
IRQs, to clean up stale IRQ handles.
Fixes: 993107f0f440 ("common/cnxk: limit SSO interrupt allocation count")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_dev.c | 16 +++++++---------
drivers/common/cnxk/roc_dev_priv.h | 2 ++
drivers/common/cnxk/roc_sso.c | 7 +++++++
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 35eb8b7628..793d78fdbc 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -947,8 +947,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
RVU_VF_INT_VEC_MBOX);
}
-static void
-mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+void
+dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
if (dev_is_vf(dev))
mbox_unregister_vf_irq(pci_dev, dev);
@@ -1026,8 +1026,8 @@ roc_pf_vf_flr_irq(void *param)
}
}
-static int
-vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+void
+dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
@@ -1043,8 +1043,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev,
RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
}
int
@@ -1529,7 +1527,7 @@ thread_fail:
iounmap:
dev_vf_mbase_put(pci_dev, vf_mbase);
mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
+ dev_mbox_unregister_irq(pci_dev, dev);
if (dev->ops)
plt_free(dev->ops);
mbox_fini:
@@ -1565,10 +1563,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
if (dev->lmt_mz)
plt_memzone_free(dev->lmt_mz);
- mbox_unregister_irq(pci_dev, dev);
+ dev_mbox_unregister_irq(pci_dev, dev);
if (!dev_is_vf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
+ dev_vf_flr_unregister_irqs(pci_dev, dev);
/* Release PF - VF */
mbox = &dev->mbox_vfpf;
if (mbox->hwbase && mbox->dev)
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 5b2c5096f8..f1fa498dc1 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -128,6 +128,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
int dev_irq_reconfigure(struct plt_intr_handle *intr_handle, uint16_t max_intr);
int dev_mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev);
+void dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev);
int dev_vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev);
+void dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev);
#endif /* _ROC_DEV_PRIV_H */
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index b02c9c7f38..14cdf14554 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -765,7 +765,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
return dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt + npa_vec_cnt);
}
+ /* Before re-configuring unregister irqs */
npa_vec_cnt = (dev->npa.pci_dev == pci_dev) ? NPA_LF_INT_VEC_POISON + 1 : 0;
+ if (npa_vec_cnt)
+ npa_unregister_irqs(&dev->npa);
+
+ dev_mbox_unregister_irq(pci_dev, dev);
+ if (!dev_is_vf(dev))
+ dev_vf_flr_unregister_irqs(pci_dev, dev);
/* Re-configure to include SSO vectors */
rc = dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt + npa_vec_cnt + sso_vec_cnt);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.144644499 +0800
+++ 0043-common-cnxk-fix-IRQ-reconfiguration.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 758b58f06a43564f435e3ecc1a8af994564a6b6b Mon Sep 17 00:00:00 2001
+From e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 758b58f06a43564f435e3ecc1a8af994564a6b6b ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 26aa35894b..c905d35ea6 100644
+index 35eb8b7628..793d78fdbc 100644
@@ -23,2 +25,2 @@
-@@ -1047,8 +1047,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
- dev_irq_unregister(intr_handle, roc_pf_vf_mbox_irq, dev, RVU_VF_INT_VEC_MBOX);
+@@ -947,8 +947,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+ RVU_VF_INT_VEC_MBOX);
@@ -34 +36 @@
-@@ -1126,8 +1126,8 @@ roc_pf_vf_flr_irq(void *param)
+@@ -1026,8 +1026,8 @@ roc_pf_vf_flr_irq(void *param)
@@ -45 +47 @@
-@@ -1143,8 +1143,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+@@ -1043,8 +1043,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
@@ -54 +56 @@
-@@ -1723,7 +1721,7 @@ thread_fail:
+@@ -1529,7 +1527,7 @@ thread_fail:
@@ -63 +65 @@
-@@ -1761,10 +1759,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
+@@ -1565,10 +1563,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
@@ -70 +72 @@
- if (!dev_is_vf(dev)) {
+ if (!dev_is_vf(dev))
@@ -73,3 +75,3 @@
- /* Releasing memory allocated for mbox region */
- if (dev->vf_mbox_mz)
- plt_memzone_free(dev->vf_mbox_mz);
+ /* Release PF - VF */
+ mbox = &dev->mbox_vfpf;
+ if (mbox->hwbase && mbox->dev)
@@ -77 +79 @@
-index 434e165b56..5ab4f72f8f 100644
+index 5b2c5096f8..f1fa498dc1 100644
@@ -80 +82 @@
-@@ -170,6 +170,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
+@@ -128,6 +128,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
@@ -90 +92 @@
-index 499f93e373..2e3b134bfc 100644
+index b02c9c7f38..14cdf14554 100644
@@ -93 +95 @@
-@@ -842,7 +842,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
+@@ -765,7 +765,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
* patch 'baseband/acc: fix access to deallocated mem' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (42 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
` (76 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Hernan Vargas; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=25491476c647f6dac991948667d804e334b39aa9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 25491476c647f6dac991948667d804e334b39aa9 Mon Sep 17 00:00:00 2001
From: Hernan Vargas <hernan.vargas@intel.com>
Date: Wed, 9 Oct 2024 14:12:51 -0700
Subject: [PATCH] baseband/acc: fix access to deallocated mem
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a090b8ffe73ed21d54e17e5d5711d2e817d7229e ]
Prevent op_addr access during queue_stop operation, as this memory may
have been deallocated.
Fixes: e640f6cdfa84 ("baseband/acc200: add LDPC processing")
Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/baseband/acc/rte_acc100_pmd.c | 36 ----------------------
drivers/baseband/acc/rte_vrb_pmd.c | 44 +--------------------------
2 files changed, 1 insertion(+), 79 deletions(-)
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 9d028f0f48..3e135c480d 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -838,51 +838,15 @@ free_q:
return ret;
}
-static inline void
-acc100_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
- uint16_t index)
-{
- if (op == NULL)
- return;
- if (op_type == RTE_BBDEV_OP_LDPC_DEC)
- rte_bbdev_log(DEBUG,
- " Op 5GUL %d %d %d %d %d %d %d %d %d %d %d %d",
- index,
- op->ldpc_dec.basegraph, op->ldpc_dec.z_c,
- op->ldpc_dec.n_cb, op->ldpc_dec.q_m,
- op->ldpc_dec.n_filler, op->ldpc_dec.cb_params.e,
- op->ldpc_dec.op_flags, op->ldpc_dec.rv_index,
- op->ldpc_dec.iter_max, op->ldpc_dec.iter_count,
- op->ldpc_dec.harq_combined_input.length
- );
- else if (op_type == RTE_BBDEV_OP_LDPC_ENC) {
- struct rte_bbdev_enc_op *op_dl = (struct rte_bbdev_enc_op *) op;
- rte_bbdev_log(DEBUG,
- " Op 5GDL %d %d %d %d %d %d %d %d %d",
- index,
- op_dl->ldpc_enc.basegraph, op_dl->ldpc_enc.z_c,
- op_dl->ldpc_enc.n_cb, op_dl->ldpc_enc.q_m,
- op_dl->ldpc_enc.n_filler, op_dl->ldpc_enc.cb_params.e,
- op_dl->ldpc_enc.op_flags, op_dl->ldpc_enc.rv_index
- );
- }
-}
-
static int
acc100_queue_stop(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc_queue *q;
- struct rte_bbdev_dec_op *op;
- uint16_t i;
q = dev->data->queues[queue_id].queue_private;
rte_bbdev_log(INFO, "Queue Stop %d H/T/D %d %d %x OpType %d",
queue_id, q->sw_ring_head, q->sw_ring_tail,
q->sw_ring_depth, q->op_type);
- for (i = 0; i < q->sw_ring_depth; ++i) {
- op = (q->ring_addr + i)->req.op_addr;
- acc100_print_op(op, q->op_type, i);
- }
/* ignore all operations in flight and clear counters */
q->sw_ring_tail = q->sw_ring_head;
q->aq_enqueued = 0;
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 88e1d03ebf..2ff3f313cb 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1047,58 +1047,16 @@ free_q:
return ret;
}
-static inline void
-vrb_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
- uint16_t index)
-{
- if (op == NULL)
- return;
- if (op_type == RTE_BBDEV_OP_LDPC_DEC)
- rte_bbdev_log(INFO,
- " Op 5GUL %d %d %d %d %d %d %d %d %d %d %d %d",
- index,
- op->ldpc_dec.basegraph, op->ldpc_dec.z_c,
- op->ldpc_dec.n_cb, op->ldpc_dec.q_m,
- op->ldpc_dec.n_filler, op->ldpc_dec.cb_params.e,
- op->ldpc_dec.op_flags, op->ldpc_dec.rv_index,
- op->ldpc_dec.iter_max, op->ldpc_dec.iter_count,
- op->ldpc_dec.harq_combined_input.length
- );
- else if (op_type == RTE_BBDEV_OP_LDPC_ENC) {
- struct rte_bbdev_enc_op *op_dl = (struct rte_bbdev_enc_op *) op;
- rte_bbdev_log(INFO,
- " Op 5GDL %d %d %d %d %d %d %d %d %d",
- index,
- op_dl->ldpc_enc.basegraph, op_dl->ldpc_enc.z_c,
- op_dl->ldpc_enc.n_cb, op_dl->ldpc_enc.q_m,
- op_dl->ldpc_enc.n_filler, op_dl->ldpc_enc.cb_params.e,
- op_dl->ldpc_enc.op_flags, op_dl->ldpc_enc.rv_index
- );
- } else if (op_type == RTE_BBDEV_OP_MLDTS) {
- struct rte_bbdev_mldts_op *op_mldts = (struct rte_bbdev_mldts_op *) op;
- rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x",
- index,
- op_mldts->mldts.num_rbs, op_mldts->mldts.num_layers,
- op_mldts->mldts.r_rep,
- op_mldts->mldts.c_rep, op_mldts->mldts.op_flags);
- }
-}
-
/* Stop queue and clear counters. */
static int
vrb_queue_stop(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc_queue *q;
- struct rte_bbdev_dec_op *op;
- uint16_t i;
+
q = dev->data->queues[queue_id].queue_private;
rte_bbdev_log(INFO, "Queue Stop %d H/T/D %d %d %x OpType %d",
queue_id, q->sw_ring_head, q->sw_ring_tail,
q->sw_ring_depth, q->op_type);
- for (i = 0; i < q->sw_ring_depth; ++i) {
- op = (q->ring_addr + i)->req.op_addr;
- vrb_print_op(op, q->op_type, i);
- }
/* ignore all operations in flight and clear counters */
q->sw_ring_tail = q->sw_ring_head;
q->aq_enqueued = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.188469798 +0800
+++ 0044-baseband-acc-fix-access-to-deallocated-mem.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From a090b8ffe73ed21d54e17e5d5711d2e817d7229e Mon Sep 17 00:00:00 2001
+From 25491476c647f6dac991948667d804e334b39aa9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a090b8ffe73ed21d54e17e5d5711d2e817d7229e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 5e6ee85e13..c690d1492b 100644
+index 9d028f0f48..3e135c480d 100644
@@ -76 +78 @@
-index 646c12ad5c..e3f98d6e42 100644
+index 88e1d03ebf..2ff3f313cb 100644
@@ -79 +81 @@
-@@ -1048,58 +1048,16 @@ free_q:
+@@ -1047,58 +1047,16 @@ free_q:
* patch 'baseband/acc: fix soft output bypass RM' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (43 preceding siblings ...)
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
` (75 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Hernan Vargas; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9 Mon Sep 17 00:00:00 2001
From: Hernan Vargas <hernan.vargas@intel.com>
Date: Wed, 9 Oct 2024 14:12:52 -0700
Subject: [PATCH] baseband/acc: fix soft output bypass RM
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2fd167b61bc6c6f40a6c04085caa56be40451e2a ]
Remove the soft output bypass RM capability due to VRB2 device
limitations.
Fixes: b49fe052f9cd ("baseband/acc: add FEC capabilities for VRB2 variant")
Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/baseband/acc/rte_vrb_pmd.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 2ff3f313cb..4979bb8cec 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1270,7 +1270,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION |
RTE_BBDEV_LDPC_LLR_COMPRESSION |
RTE_BBDEV_LDPC_SOFT_OUT_ENABLE |
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS |
RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS |
RTE_BBDEV_LDPC_DEC_INTERRUPTS,
.llr_size = 8,
@@ -1584,18 +1583,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
fcw->so_en = check_bit(op->ldpc_dec.op_flags, RTE_BBDEV_LDPC_SOFT_OUT_ENABLE);
fcw->so_bypass_intlv = check_bit(op->ldpc_dec.op_flags,
RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS);
- fcw->so_bypass_rm = check_bit(op->ldpc_dec.op_flags,
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS);
+ fcw->so_bypass_rm = 0;
fcw->minsum_offset = 1;
fcw->dec_llrclip = 2;
}
/*
- * These are all implicitly set
+ * These are all implicitly set:
* fcw->synd_post = 0;
* fcw->dec_convllr = 0;
* fcw->hcout_convllr = 0;
* fcw->hcout_size1 = 0;
+ * fcw->so_it = 0;
* fcw->hcout_offset = 0;
* fcw->negstop_th = 0;
* fcw->negstop_it = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.246393997 +0800
+++ 0045-baseband-acc-fix-soft-output-bypass-RM.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From 2fd167b61bc6c6f40a6c04085caa56be40451e2a Mon Sep 17 00:00:00 2001
+From 9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2fd167b61bc6c6f40a6c04085caa56be40451e2a ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index e3f98d6e42..52a683e4e4 100644
+index 2ff3f313cb..4979bb8cec 100644
@@ -22 +24 @@
-@@ -1272,7 +1272,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
+@@ -1270,7 +1270,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
@@ -30 +32 @@
-@@ -1643,18 +1642,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
+@@ -1584,18 +1583,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
* patch 'vhost: fix offset while mapping log base address' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (44 preceding siblings ...)
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
` (74 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bill Xiang; +Cc: xuemingl, Chenbo Xia, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1f2330c670d1bedb5e54a790eda3874d402ab88d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1f2330c670d1bedb5e54a790eda3874d402ab88d Mon Sep 17 00:00:00 2001
From: Bill Xiang <xiangwencheng@dayudpu.com>
Date: Mon, 8 Jul 2024 14:57:49 +0800
Subject: [PATCH] vhost: fix offset while mapping log base address
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bdd96d8ac76ca412165b2d1bbd3701e978246d8e ]
For sanity, the offset should be the last parameter of mmap.
Fixes: fbc4d248b198 ("vhost: fix offset while mmaping log base address")
Signed-off-by: Bill Xiang <xiangwencheng@dayudpu.com>
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
---
.mailmap | 1 +
lib/vhost/vhost_user.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 4022645615..f4d2a72009 100644
--- a/.mailmap
+++ b/.mailmap
@@ -177,6 +177,7 @@ Bert van Leeuwen <bert.vanleeuwen@netronome.com>
Bhagyada Modali <bhagyada.modali@amd.com>
Bharat Mota <bmota@vmware.com>
Bill Hong <bhong@brocade.com>
+Bill Xiang <xiangwencheng@dayudpu.com>
Billy McFall <bmcfall@redhat.com>
Billy O'Mahony <billy.o.mahony@intel.com>
Bing Zhao <bingz@nvidia.com> <bingz@mellanox.com> <bing.zhao@hxt-semitech.com>
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index f8e42dd619..5b6c90437c 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -2328,7 +2328,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
* mmap from 0 to workaround a hugepage mmap bug: mmap will
* fail when offset is not page size aligned.
*/
- addr = mmap(0, size + off, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ addr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, off);
alignment = get_blk_size(fd);
close(fd);
if (addr == MAP_FAILED) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.286278496 +0800
+++ 0046-vhost-fix-offset-while-mapping-log-base-address.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From bdd96d8ac76ca412165b2d1bbd3701e978246d8e Mon Sep 17 00:00:00 2001
+From 1f2330c670d1bedb5e54a790eda3874d402ab88d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bdd96d8ac76ca412165b2d1bbd3701e978246d8e ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index ed4ea17c4c..544e62df7d 100644
+index 4022645615..f4d2a72009 100644
@@ -22,3 +24,3 @@
-@@ -183,6 +183,7 @@ Bhagyada Modali <bhagyada.modali@amd.com>
- Bharat Mota <bharat.mota@broadcom.com> <bmota@vmware.com>
- Bhuvan Mital <bhuvan.mital@amd.com>
+@@ -177,6 +177,7 @@ Bert van Leeuwen <bert.vanleeuwen@netronome.com>
+ Bhagyada Modali <bhagyada.modali@amd.com>
+ Bharat Mota <bmota@vmware.com>
@@ -31 +33 @@
-index 5f470da38a..0893ae80bb 100644
+index f8e42dd619..5b6c90437c 100644
@@ -34 +36 @@
-@@ -2399,7 +2399,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
+@@ -2328,7 +2328,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
* patch 'vdpa: update used flags in used ring relay' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (45 preceding siblings ...)
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
` (73 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bill Xiang; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7 Mon Sep 17 00:00:00 2001
From: Bill Xiang <xiangwencheng@dayudpu.com>
Date: Wed, 17 Jul 2024 11:24:47 +0800
Subject: [PATCH] vdpa: update used flags in used ring relay
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b3f923fe1710e448c073f03aad2c087ffb6c7a5c ]
The vDPA device will work incorrectly if flags such as
VRING_USED_F_NO_NOTIFY are not updated correctly.
Fixes: b13ad2decc83 ("vhost: provide helpers for virtio ring relay")
Signed-off-by: Bill Xiang <xiangwencheng@dayudpu.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vdpa.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index ce4fb09859..f9730d0685 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -174,6 +174,7 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
idx = vq->used->idx;
idx_m = s_vring->used->idx;
ret = (uint16_t)(idx_m - idx);
+ vq->used->flags = s_vring->used->flags;
while (idx != idx_m) {
/* copy used entry, used ring logging is not covered here */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.331825395 +0800
+++ 0047-vdpa-update-used-flags-in-used-ring-relay.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From b3f923fe1710e448c073f03aad2c087ffb6c7a5c Mon Sep 17 00:00:00 2001
+From 23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b3f923fe1710e448c073f03aad2c087ffb6c7a5c ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index a1dd5a753b..8abb073675 100644
+index ce4fb09859..f9730d0685 100644
* patch 'vdpa/nfp: fix hardware initialization' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (46 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
` (72 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Xinying Yu
Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, Maxime Coquelin, dpdk stable
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d0688a90529e682fd211f991e7a0b7ceb899993e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d0688a90529e682fd211f991e7a0b7ceb899993e Mon Sep 17 00:00:00 2001
From: Xinying Yu <xinying.yu@corigine.com>
Date: Mon, 5 Aug 2024 10:12:39 +0800
Subject: [PATCH] vdpa/nfp: fix hardware initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fc470d5e88f848957b8f6d2089210254525e9e13 ]
Reconfiguring the NIC will fail because the initialization
logic for the queue configuration pointer is missing.
Fix this by adding the correct initialization logic.
Fixes: d89f4990c14e ("vdpa/nfp: add hardware init")
Signed-off-by: Xinying Yu <xinying.yu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.mailmap | 1 +
drivers/vdpa/nfp/nfp_vdpa_core.c | 9 +++++++++
2 files changed, 10 insertions(+)
diff --git a/.mailmap b/.mailmap
index f4d2a72009..a72dce1a61 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1601,6 +1601,7 @@ Xieming Katty <katty.xieming@huawei.com>
Xinfeng Zhao <xinfengx.zhao@intel.com>
Xingguang He <xingguang.he@intel.com>
Xingyou Chen <niatlantice@gmail.com>
+Xinying Yu <xinying.yu@corigine.com>
Xin Long <longxin.xl@alibaba-inc.com>
Xi Zhang <xix.zhang@intel.com>
Xuan Ding <xuan.ding@intel.com>
diff --git a/drivers/vdpa/nfp/nfp_vdpa_core.c b/drivers/vdpa/nfp/nfp_vdpa_core.c
index 7b877605e4..291798196c 100644
--- a/drivers/vdpa/nfp/nfp_vdpa_core.c
+++ b/drivers/vdpa/nfp/nfp_vdpa_core.c
@@ -55,7 +55,10 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
struct rte_pci_device *pci_dev)
{
uint32_t queue;
+ uint8_t *tx_bar;
+ uint32_t start_q;
struct nfp_hw *hw;
+ uint32_t tx_bar_off;
uint8_t *notify_base;
hw = &vdpa_hw->super;
@@ -82,6 +85,12 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
idx + 1, vdpa_hw->notify_addr[idx + 1]);
}
+ /* NFP vDPA cfg queue setup */
+ start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
+ tx_bar_off = start_q * NFP_QCP_QUEUE_ADDR_SZ;
+ tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
+ hw->qcp_cfg = tx_bar + NFP_QCP_QUEUE_ADDR_SZ;
+
vdpa_hw->features = (1ULL << VIRTIO_F_VERSION_1) |
(1ULL << VIRTIO_F_IN_ORDER) |
(1ULL << VHOST_USER_F_PROTOCOL_FEATURES);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.394025494 +0800
+++ 0048-vdpa-nfp-fix-hardware-initialization.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From fc470d5e88f848957b8f6d2089210254525e9e13 Mon Sep 17 00:00:00 2001
+From d0688a90529e682fd211f991e7a0b7ceb899993e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fc470d5e88f848957b8f6d2089210254525e9e13 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 544e62df7d..f51b1dda5d 100644
+index f4d2a72009..a72dce1a61 100644
@@ -27 +29 @@
-@@ -1655,6 +1655,7 @@ Xieming Katty <katty.xieming@huawei.com>
+@@ -1601,6 +1601,7 @@ Xieming Katty <katty.xieming@huawei.com>
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'vdpa/nfp: fix reconfiguration' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (47 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
` (71 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Xinying Yu
Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a46c4c1b435b99d9b5a3a2a9867b452cf42abd49
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a46c4c1b435b99d9b5a3a2a9867b452cf42abd49 Mon Sep 17 00:00:00 2001
From: Xinying Yu <xinying.yu@corigine.com>
Date: Mon, 5 Aug 2024 10:12:40 +0800
Subject: [PATCH] vdpa/nfp: fix reconfiguration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d149827203a61da4c8c9e4a13e07bb0260438124 ]
The ctrl word of vDPA is located in the extended word, so it
should use 'nfp_ext_reconfig()' rather than 'nfp_reconfig()'.
Also replace the misused 'NFP_NET_CFG_CTRL_SCATTER' macro
with 'NFP_NET_CFG_CTRL_VIRTIO'.
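The underlying idea is that each control bit belongs to a specific
configuration word, and must be programmed through the matching reconfig
path. This is a hedged model of that dispatch, with assumed enum and
field names, not the driver's actual structures:

```c
#include <stdint.h>

/* Illustrative model: newer capability bits such as a virtio offload
 * flag live in a second, "extended" control word rather than the base
 * one (the names here are assumptions). */
enum cfg_word { CFG_WORD_BASE, CFG_WORD_EXT };

struct ctrl_bit {
	uint32_t mask;
	enum cfg_word word;
};

/* Set a control bit in whichever word it belongs to; programming an
 * extended bit through the base reconfig path is the kind of mix-up
 * this patch removes. */
static void
program_ctrl(struct ctrl_bit b, uint32_t *base_ctrl, uint32_t *ext_ctrl)
{
	if (b.word == CFG_WORD_EXT)
		*ext_ctrl |= b.mask;	/* goes through ext reconfig */
	else
		*base_ctrl |= b.mask;	/* goes through base reconfig */
}
```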
Fixes: b47a0373903f ("vdpa/nfp: add datapath update")
Signed-off-by: Xinying Yu <xinying.yu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/common/nfp/nfp_common_ctrl.h | 1 +
drivers/vdpa/nfp/nfp_vdpa_core.c | 16 ++++++++++++----
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/common/nfp/nfp_common_ctrl.h b/drivers/common/nfp/nfp_common_ctrl.h
index d09fd2b892..532bc6584a 100644
--- a/drivers/common/nfp/nfp_common_ctrl.h
+++ b/drivers/common/nfp/nfp_common_ctrl.h
@@ -223,6 +223,7 @@ struct nfp_net_fw_ver {
#define NFP_NET_CFG_CTRL_IPSEC_SM_LOOKUP (0x1 << 3) /**< SA short match lookup */
#define NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP (0x1 << 4) /**< SA long match lookup */
#define NFP_NET_CFG_CTRL_MULTI_PF (0x1 << 5)
+#define NFP_NET_CFG_CTRL_VIRTIO (0x1 << 10) /**< Virtio offload */
#define NFP_NET_CFG_CTRL_IN_ORDER (0x1 << 11) /**< Virtio in-order flag */
#define NFP_NET_CFG_CAP_WORD1 0x00a4
diff --git a/drivers/vdpa/nfp/nfp_vdpa_core.c b/drivers/vdpa/nfp/nfp_vdpa_core.c
index 291798196c..6d07356581 100644
--- a/drivers/vdpa/nfp/nfp_vdpa_core.c
+++ b/drivers/vdpa/nfp/nfp_vdpa_core.c
@@ -101,7 +101,7 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
static uint32_t
nfp_vdpa_check_offloads(void)
{
- return NFP_NET_CFG_CTRL_SCATTER |
+ return NFP_NET_CFG_CTRL_VIRTIO |
NFP_NET_CFG_CTRL_IN_ORDER;
}
@@ -112,6 +112,7 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
int ret;
uint32_t update;
uint32_t new_ctrl;
+ uint32_t new_ext_ctrl;
struct timespec wait_tst;
struct nfp_hw *hw = &vdpa_hw->super;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
@@ -131,8 +132,6 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
nfp_disable_queues(hw);
nfp_enable_queues(hw, NFP_VDPA_MAX_QUEUES, NFP_VDPA_MAX_QUEUES);
- new_ctrl = nfp_vdpa_check_offloads();
-
nn_cfg_writel(hw, NFP_NET_CFG_MTU, 9216);
nn_cfg_writel(hw, NFP_NET_CFG_FLBUFSZ, 10240);
@@ -147,8 +146,17 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
/* Writing new MAC to the specific port BAR address */
nfp_write_mac(hw, (uint8_t *)mac_addr);
+ new_ext_ctrl = nfp_vdpa_check_offloads();
+
+ update = NFP_NET_CFG_UPDATE_GEN;
+ ret = nfp_ext_reconfig(hw, new_ext_ctrl, update);
+ if (ret != 0)
+ return -EIO;
+
+ hw->ctrl_ext = new_ext_ctrl;
+
/* Enable device */
- new_ctrl |= NFP_NET_CFG_CTRL_ENABLE;
+ new_ctrl = NFP_NET_CFG_CTRL_ENABLE;
/* Signal the NIC about the change */
update = NFP_NET_CFG_UPDATE_MACADDR |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.484732592 +0800
+++ 0049-vdpa-nfp-fix-reconfiguration.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From d149827203a61da4c8c9e4a13e07bb0260438124 Mon Sep 17 00:00:00 2001
+From a46c4c1b435b99d9b5a3a2a9867b452cf42abd49 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d149827203a61da4c8c9e4a13e07bb0260438124 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 69596dd6f5..1b30f81fdb 100644
+index d09fd2b892..532bc6584a 100644
@@ -29 +31,2 @@
-@@ -205,6 +205,7 @@ struct nfp_net_fw_ver {
+@@ -223,6 +223,7 @@ struct nfp_net_fw_ver {
+ #define NFP_NET_CFG_CTRL_IPSEC_SM_LOOKUP (0x1 << 3) /**< SA short match lookup */
@@ -32 +34,0 @@
- #define NFP_NET_CFG_CTRL_FLOW_STEER (0x1 << 8) /**< Flow Steering */
@@ -35 +36,0 @@
- #define NFP_NET_CFG_CTRL_USO (0x1 << 16) /**< UDP segmentation offload */
@@ -36,0 +38 @@
+ #define NFP_NET_CFG_CAP_WORD1 0x00a4
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/virtio-user: reset used index counter' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (48 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
` (70 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Kommula Shiva Shankar; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9c24c7b819f29a9f8962df3e2a7742d1dfee98a4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9c24c7b819f29a9f8962df3e2a7742d1dfee98a4 Mon Sep 17 00:00:00 2001
From: Kommula Shiva Shankar <kshankar@marvell.com>
Date: Mon, 5 Aug 2024 10:08:41 +0000
Subject: [PATCH] net/virtio-user: reset used index counter
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 ]
When the virtio device is reinitialized during ethdev reconfiguration,
all the virtio rings are recreated and repopulated on the device.
Accordingly, reset the used index counter value back to zero.
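The pattern is that any driver-side soft state tied to ring memory must
return to its boot values when the rings are recreated. A minimal
stand-in for the packed-queue bookkeeping (field names mirror the patch;
the struct itself is illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-in for the driver-side bookkeeping of a packed
 * virtqueue; the real structure lives in the virtio-user driver. */
struct packed_q_state {
	bool avail_wrap_counter;
	bool used_wrap_counter;
	uint16_t used_idx;
};

/* When the rings are recreated on reconfiguration, all soft state
 * must return to its boot values -- including used_idx, which is
 * the reset this patch adds. */
static void
packed_q_state_reset(struct packed_q_state *s)
{
	s->avail_wrap_counter = true;
	s->used_wrap_counter = true;
	s->used_idx = 0;
}
```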
Fixes: 48a4464029a7 ("net/virtio-user: support control VQ for packed")
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 3a31642899..f176df86d4 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -199,6 +199,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
vring->device = (void *)(uintptr_t)used_addr;
dev->packed_queues[queue_idx].avail_wrap_counter = true;
dev->packed_queues[queue_idx].used_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_idx = 0;
for (i = 0; i < vring->num; i++)
vring->desc[i].flags = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.533567591 +0800
+++ 0050-net-virtio-user-reset-used-index-counter.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 Mon Sep 17 00:00:00 2001
+From 9c24c7b819f29a9f8962df3e2a7742d1dfee98a4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index bf29f0dacd..747dddeb2e 100644
+index 3a31642899..f176df86d4 100644
@@ -23 +25 @@
-@@ -204,6 +204,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
+@@ -199,6 +199,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'vhost: restrict set max queue pair API to VDUSE' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (49 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
` (69 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: xuemingl, Yu Jiang, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e1bd966815e01d4bbac59b413ec35e5ebd0a416c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e1bd966815e01d4bbac59b413ec35e5ebd0a416c Mon Sep 17 00:00:00 2001
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 3 Oct 2024 10:11:10 +0200
Subject: [PATCH] vhost: restrict set max queue pair API to VDUSE
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e1808999d36bb2e136a649f4651f36030aa468f1 ]
In order to avoid breaking Vhost-user live-migration, we want the
rte_vhost_driver_set_max_queue_num API to only be effective with
VDUSE.
Furthermore, this API is only really needed for VDUSE, where the
device's number of queues is defined by the backend. For Vhost-user,
this is defined by the frontend (e.g. QEMU), so the advantage of
further restricting the maximum number of queue pairs is limited to
a small memory gain (a handful of pointers).
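The resulting behaviour can be sketched as a setter that only honours
the request for VDUSE backends and silently keeps the compiled-in
maximum otherwise. This is a simplified model with an assumed ceiling,
not the actual lib/vhost code:

```c
#include <stdbool.h>
#include <stdint.h>

#define VHOST_MAX_QUEUE_PAIRS 128u	/* assumed ceiling, for illustration */

struct backend {
	bool is_vduse;
	uint32_t max_queue_pairs;
};

/* Only a VDUSE backend honours the requested value; a Vhost-user
 * backend keeps the compiled-in maximum, since its queue count is
 * decided by the frontend. */
static int
set_max_queue_pairs(struct backend *b, uint32_t requested)
{
	if (requested == 0 || requested > VHOST_MAX_QUEUE_PAIRS)
		return -1;
	if (!b->is_vduse)
		return 0;	/* silently keep the default */
	b->max_queue_pairs = requested;
	return 0;
}
```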
Fixes: 4aa1f88ac13d ("vhost: add API to set max queue pairs")
Reported-by: Yu Jiang <yux.jiang@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
---
lib/vhost/rte_vhost.h | 2 ++
lib/vhost/socket.c | 11 +++++++++++
2 files changed, 13 insertions(+)
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index db92f05344..c6dba67a67 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -613,6 +613,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
* @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
*
* Set the maximum number of queue pairs supported by the device.
+ * The value set is ignored for Vhost-user backends. It is only taken into
+ * account with VDUSE backends.
*
* @param path
* The vhost-user socket file path
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 0b95c54c5b..ffb8518e74 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -865,6 +865,17 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
goto unlock_exit;
}
+ /*
+ * This is only useful for VDUSE for which number of virtqueues is set
+ * by the backend. For Vhost-user, the number of virtqueues is defined
+ * by the frontend.
+ */
+ if (!vsocket->is_vduse) {
+ VHOST_LOG_CONFIG(path, DEBUG, "Keeping %u max queue pairs for Vhost-user backend\n",
+ VHOST_MAX_QUEUE_PAIRS);
+ goto unlock_exit;
+ }
+
vsocket->max_queue_pairs = max_queue_pairs;
unlock_exit:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.569998190 +0800
+++ 0051-vhost-restrict-set-max-queue-pair-API-to-VDUSE.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From e1808999d36bb2e136a649f4651f36030aa468f1 Mon Sep 17 00:00:00 2001
+From e1bd966815e01d4bbac59b413ec35e5ebd0a416c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e1808999d36bb2e136a649f4651f36030aa468f1 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -24,2 +26,2 @@
- lib/vhost/socket.c | 12 ++++++++++++
- 2 files changed, 14 insertions(+)
+ lib/vhost/socket.c | 11 +++++++++++
+ 2 files changed, 13 insertions(+)
@@ -28 +30 @@
-index c7a5f56df8..1a91a00f02 100644
+index db92f05344..c6dba67a67 100644
@@ -31 +33 @@
-@@ -614,6 +614,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
+@@ -613,6 +613,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
@@ -41 +43 @@
-index a75728a2e4..d29d15494c 100644
+index 0b95c54c5b..ffb8518e74 100644
@@ -44 +46 @@
-@@ -860,6 +860,18 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
+@@ -865,6 +865,17 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
@@ -54,3 +56,2 @@
-+ VHOST_CONFIG_LOG(path, DEBUG,
-+ "Keeping %u max queue pairs for Vhost-user backend",
-+ VHOST_MAX_QUEUE_PAIRS);
++ VHOST_LOG_CONFIG(path, DEBUG, "Keeping %u max queue pairs for Vhost-user backend\n",
++ VHOST_MAX_QUEUE_PAIRS);
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'fib: fix AVX512 lookup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (50 preceding siblings ...)
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
` (68 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c098add6c3bd7d23d304d79c1ce8c3e76368c622
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c098add6c3bd7d23d304d79c1ce8c3e76368c622 Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Fri, 6 Sep 2024 17:04:36 +0000
Subject: [PATCH] fib: fix AVX512 lookup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 66ed1786ad067198814e9b2ab54f0cad68a58f1e ]
Vector lookup uses gather instructions, which load data in 4-byte
chunks. This could lead to an out-of-bounds access at the end of the
tbl24 in the case of 1- or 2-byte entries, e.g. if a lookup is
attempted for 255.255.255.255 in the IPv4 case.
This patch fixes the potential out-of-bounds access by the gather
instruction by allocating an extra 4 bytes at the end of the tbl24.
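The sizing rule can be expressed as a small helper: pad the table
allocation by one 32-bit lane so a 4-byte gather on the last narrow
entry stays inside the buffer. This is a sketch of the arithmetic, not
the fib library's allocation code:

```c
#include <stddef.h>
#include <stdint.h>

/* A 4-byte-wide gather may read past the last 1- or 2-byte entry of
 * the table, so the allocation is padded by one 32-bit lane -- the
 * same sizeof(uint32_t) the patch adds to both tbl24 allocations. */
static size_t
tbl24_alloc_size(size_t num_entries, size_t entry_size)
{
	return num_entries * entry_size + sizeof(uint32_t);
}
```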
Fixes: b3509fa3653e ("fib: add AVX512 lookup")
Fixes: 1e5630e40d95 ("fib6: add AVX512 lookup")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
lib/fib/dir24_8.c | 4 ++--
lib/fib/trie.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/fib/dir24_8.c b/lib/fib/dir24_8.c
index c739e92304..07c324743b 100644
--- a/lib/fib/dir24_8.c
+++ b/lib/fib/dir24_8.c
@@ -526,8 +526,8 @@ dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *fib_conf)
snprintf(mem_name, sizeof(mem_name), "DP_%s", name);
dp = rte_zmalloc_socket(name, sizeof(struct dir24_8_tbl) +
- DIR24_8_TBL24_NUM_ENT * (1 << nh_sz), RTE_CACHE_LINE_SIZE,
- socket_id);
+ DIR24_8_TBL24_NUM_ENT * (1 << nh_sz) + sizeof(uint32_t),
+ RTE_CACHE_LINE_SIZE, socket_id);
if (dp == NULL) {
rte_errno = ENOMEM;
return NULL;
diff --git a/lib/fib/trie.c b/lib/fib/trie.c
index 7b33cdaa7b..ca1c2fe3bc 100644
--- a/lib/fib/trie.c
+++ b/lib/fib/trie.c
@@ -647,8 +647,8 @@ trie_create(const char *name, int socket_id,
snprintf(mem_name, sizeof(mem_name), "DP_%s", name);
dp = rte_zmalloc_socket(name, sizeof(struct rte_trie_tbl) +
- TRIE_TBL24_NUM_ENT * (1 << nh_sz), RTE_CACHE_LINE_SIZE,
- socket_id);
+ TRIE_TBL24_NUM_ENT * (1 << nh_sz) + sizeof(uint32_t),
+ RTE_CACHE_LINE_SIZE, socket_id);
if (dp == NULL) {
rte_errno = ENOMEM;
return dp;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.607431989 +0800
+++ 0052-fib-fix-AVX512-lookup.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 66ed1786ad067198814e9b2ab54f0cad68a58f1e Mon Sep 17 00:00:00 2001
+From c098add6c3bd7d23d304d79c1ce8c3e76368c622 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 66ed1786ad067198814e9b2ab54f0cad68a58f1e ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/e1000: fix link status crash in secondary process' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (51 preceding siblings ...)
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
` (67 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Jun Wang; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=be22d7ff5d84de7805ebe4e4db3993114ea16408
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From be22d7ff5d84de7805ebe4e4db3993114ea16408 Mon Sep 17 00:00:00 2001
From: Jun Wang <junwang01@cestc.cn>
Date: Fri, 12 Jul 2024 19:30:47 +0800
Subject: [PATCH] net/e1000: fix link status crash in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 84506cfe07326fd6ddb158f3fa57bd678751561a ]
The code to update link status is not safe in a secondary process.
If called from a secondary process it will crash, for example from dumpcap:
/dpdk/app/dpdk-dumpcap -i 0000:00:04.0
File: /tmp/dpdk-dumpcap_0_0000:00:04.0_20240723020203.pcapng
Segmentation fault (core dumped)
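The fix is a process-role guard at the top of the function: only the
primary process may touch the device, and a secondary caller gets an
error code instead of a crash. A hedged stand-in (the enum replaces the
real rte_eal_process_type() check):

```c
#include <stdbool.h>

/* Stand-in for rte_eal_process_type(); in this model only the
 * primary process owns the device registers. */
enum proc_type { PROC_PRIMARY, PROC_SECONDARY };

/* Guard hardware access by process role: a secondary caller gets an
 * error code instead of dereferencing state it does not own, which
 * is the crash the patch prevents. */
static int
link_update(enum proc_type self, int *link_up)
{
	if (self != PROC_PRIMARY)
		return -1;
	*link_up = 1;	/* placeholder for the real register poll */
	return 0;
}
```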
Fixes: 805803445a02 ("e1000: support EM devices (also known as e1000/e1000e)")
Signed-off-by: Jun Wang <junwang01@cestc.cn>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/e1000/em_ethdev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index c5a4dec693..f6875b0762 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1136,6 +1136,9 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
struct rte_eth_link link;
int link_up, count;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return -1;
+
link_up = 0;
hw->mac.get_link_status = 1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.638718189 +0800
+++ 0053-net-e1000-fix-link-status-crash-in-secondary-process.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 84506cfe07326fd6ddb158f3fa57bd678751561a Mon Sep 17 00:00:00 2001
+From be22d7ff5d84de7805ebe4e4db3993114ea16408 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 84506cfe07326fd6ddb158f3fa57bd678751561a ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/cpfl: add checks for flow action types' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (52 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
` (66 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Praveen Shetty; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fa3204c299911a82fbcab1b3fac2adf8813e1621
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fa3204c299911a82fbcab1b3fac2adf8813e1621 Mon Sep 17 00:00:00 2001
From: Praveen Shetty <praveen.shetty@intel.com>
Date: Tue, 30 Jul 2024 11:45:40 +0000
Subject: [PATCH] net/cpfl: add checks for flow action types
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 86126195768418da56031305cdf3636ceb6650c8 ]
In the CPFL PMD, the port_representor action is used for the local
vport and the represented_port action for the remote port (the remote
port in this case is either the idpf PF or the VF port represented by
the cpfl PMD). Using these the other way around is an error, so add
checks for those cases.
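The validity table has exactly two forbidden combinations, which can be
captured in a tiny predicate. Names are simplified assumptions, not the
driver's enums:

```c
#include <stdbool.h>

/* Simplified action and interface types (names assumed). */
enum act_type { ACT_PORT_REPRESENTOR, ACT_REPRESENTED_PORT };
enum itf_type { ITF_VPORT, ITF_REPRESENTOR };

/* port_representor must target the local vport and represented_port
 * a representor; the two crossed combinations are the invalid cases
 * the patch now rejects. */
static bool
action_matches_itf(enum act_type a, enum itf_type t)
{
	if (a == ACT_PORT_REPRESENTOR && t == ITF_REPRESENTOR)
		return false;
	if (a == ACT_REPRESENTED_PORT && t == ITF_VPORT)
		return false;
	return true;
}
```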
Fixes: 441e777b85f1 ("net/cpfl: support represented port action")
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
index f6bd1f7599..9101a0e506 100644
--- a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
+++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
@@ -292,6 +292,17 @@ cpfl_fxp_parse_action(struct cpfl_itf *itf,
is_vsi = (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR ||
dst_itf->type == CPFL_ITF_TYPE_REPRESENTOR);
+ /* Added checks to throw an error for the invalid action types. */
+ if (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR &&
+ dst_itf->type == CPFL_ITF_TYPE_REPRESENTOR) {
+ PMD_DRV_LOG(ERR, "Cannot use port_representor action for the represented_port");
+ goto err;
+ }
+ if (action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT &&
+ dst_itf->type == CPFL_ITF_TYPE_VPORT) {
+ PMD_DRV_LOG(ERR, "Cannot use represented_port action for the local vport");
+ goto err;
+ }
if (is_vsi)
dev_id = cpfl_get_vsi_id(dst_itf);
else
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.684731688 +0800
+++ 0054-net-cpfl-add-checks-for-flow-action-types.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 86126195768418da56031305cdf3636ceb6650c8 Mon Sep 17 00:00:00 2001
+From fa3204c299911a82fbcab1b3fac2adf8813e1621 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 86126195768418da56031305cdf3636ceb6650c8 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index b9e825ef57..2c75ea6577 100644
+index f6bd1f7599..9101a0e506 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/iavf: fix crash when link is unstable' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (53 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
` (65 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Kaiwen Deng; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25 Mon Sep 17 00:00:00 2001
From: Kaiwen Deng <kaiwenx.deng@intel.com>
Date: Tue, 6 Aug 2024 08:35:27 +0800
Subject: [PATCH] net/iavf: fix crash when link is unstable
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 57ed9ca61f44ffc3801f55c749347bd717834008 ]
Physical link instability may cause a core dump: an unstable link can
generate a large number of link change events, and some of these
events may be captured by the VF before VF resources are allocated,
leading to a core dump.
This commit checks that vf_res is valid before dereferencing it.
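The guard relies on C's short-circuit evaluation: test the pointer
before the flags it points to. A minimal sketch with an assumed
capability bit position:

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define CAP_ADV_LINK_SPEED (1u << 7)	/* bit position assumed */

struct vf_res {
	uint32_t vf_cap_flags;
};

/* Link events can arrive before VF resources exist, so the pointer
 * must be checked before the capability flags -- the ordering the
 * patch enforces with its vf->vf_res != NULL test. */
static bool
has_adv_link_speed(const struct vf_res *res)
{
	return res != NULL && (res->vf_cap_flags & CAP_ADV_LINK_SPEED) != 0;
}
```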
Fixes: 5e03e316c753 ("net/iavf: handle virtchnl event message without interrupt")
Signed-off-by: Kaiwen Deng <kaiwenx.deng@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 1111d30f57..8ca104c04e 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -255,8 +255,8 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
case VIRTCHNL_EVENT_LINK_CHANGE:
vf->link_up =
vpe->event_data.link_event.link_status;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
+ if (vf->vf_res != NULL &&
+ vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
vf->link_speed =
vpe->event_data.link_event_adv.link_speed;
} else {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.770511586 +0800
+++ 0055-net-iavf-fix-crash-when-link-is-unstable.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From 57ed9ca61f44ffc3801f55c749347bd717834008 Mon Sep 17 00:00:00 2001
+From 97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 57ed9ca61f44ffc3801f55c749347bd717834008 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 6d5969f084..69420bc9b6 100644
+index 1111d30f57..8ca104c04e 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/cpfl: fix parsing protocol ID mask field' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (54 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
` (64 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Praveen Shetty; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f2cb061aa1b15f690a56ead9334aab21480c7cf6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f2cb061aa1b15f690a56ead9334aab21480c7cf6 Mon Sep 17 00:00:00 2001
From: Praveen Shetty <praveen.shetty@intel.com>
Date: Fri, 23 Aug 2024 11:14:50 +0000
Subject: [PATCH] net/cpfl: fix parsing protocol ID mask field
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8125fea74b860a71605dfe94dc03ef73c912813e ]
The CPFL parser was incorrectly parsing the mask value of the next_proto_id
field from the recipe.json file as a string instead of an unsigned integer.
Fixes: 41f20298ee8c ("net/cpfl: parse flow offloading hint from JSON")
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/cpfl/cpfl_flow_parser.c | 34 +++++++++++++++++++----------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 303e979015..a67c773d18 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -198,6 +198,8 @@ cpfl_flow_js_pattern_key_proto_field(json_t *ob_fields,
for (i = 0; i < len; i++) {
json_t *object;
const char *name, *mask;
+ uint32_t mask_32b = 0;
+ int ret;
object = json_array_get(ob_fields, i);
name = cpfl_json_t_to_string(object, "name");
@@ -213,20 +215,28 @@ cpfl_flow_js_pattern_key_proto_field(json_t *ob_fields,
if (js_field->type == RTE_FLOW_ITEM_TYPE_ETH ||
js_field->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- mask = cpfl_json_t_to_string(object, "mask");
- if (!mask) {
- PMD_DRV_LOG(ERR, "Can not parse string 'mask'.");
- goto err;
- }
- if (strlen(mask) > CPFL_JS_STR_SIZE - 1) {
- PMD_DRV_LOG(ERR, "The 'mask' is too long.");
- goto err;
+ /* Added a check for parsing mask value of the next_proto_id field. */
+ if (strcmp(name, "next_proto_id") == 0) {
+ ret = cpfl_json_t_to_uint32(object, "mask", &mask_32b);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Cannot parse uint32 'mask'.");
+ goto err;
+ }
+ js_field->fields[i].mask_32b = mask_32b;
+ } else {
+ mask = cpfl_json_t_to_string(object, "mask");
+ if (!mask) {
+ PMD_DRV_LOG(ERR, "Can not parse string 'mask'.");
+ goto err;
+ }
+ if (rte_strscpy(js_field->fields[i].mask,
+ mask, CPFL_JS_STR_SIZE) < 0) {
+ PMD_DRV_LOG(ERR, "The 'mask' is too long.");
+ goto err;
+ }
}
- strncpy(js_field->fields[i].mask, mask, CPFL_JS_STR_SIZE - 1);
- } else {
- uint32_t mask_32b;
- int ret;
+ } else {
ret = cpfl_json_t_to_uint32(object, "mask", &mask_32b);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Can not parse uint32 'mask'.");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.841481184 +0800
+++ 0056-net-cpfl-fix-parsing-protocol-ID-mask-field.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From 8125fea74b860a71605dfe94dc03ef73c912813e Mon Sep 17 00:00:00 2001
+From f2cb061aa1b15f690a56ead9334aab21480c7cf6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8125fea74b860a71605dfe94dc03ef73c912813e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
* patch 'net/ice/base: fix link speed for 200G' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (55 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
` (63 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Paul Greenwalt; +Cc: xuemingl, Soumyadeep Hore, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=07885f6f163c85a7476a309661084396922b3d5b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 07885f6f163c85a7476a309661084396922b3d5b Mon Sep 17 00:00:00 2001
From: Paul Greenwalt <paul.greenwalt@intel.com>
Date: Fri, 23 Aug 2024 09:56:45 +0000
Subject: [PATCH] net/ice/base: fix link speed for 200G
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e3992ab377d2879d6c5bfb220865638404b85dba ]
When setting PHY configuration during driver initialization, 200G link
speed is not being advertised even when the PHY is capable. This is
because the get PHY capabilities link speed response is being masked by
ICE_AQ_LINK_SPEED_M, which does not include the 200G link speed bit.
Fixes: d13ad9cf1721 ("net/ice/base: add helper functions for PHY caching")
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 1131379d63..56ba2041f2 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1621,7 +1621,7 @@ struct ice_aqc_get_link_status_data {
#define ICE_AQ_LINK_PWR_QSFP_CLASS_3 2
#define ICE_AQ_LINK_PWR_QSFP_CLASS_4 3
__le16 link_speed;
-#define ICE_AQ_LINK_SPEED_M 0x7FF
+#define ICE_AQ_LINK_SPEED_M 0xFFF
#define ICE_AQ_LINK_SPEED_10MB BIT(0)
#define ICE_AQ_LINK_SPEED_100MB BIT(1)
#define ICE_AQ_LINK_SPEED_1000MB BIT(2)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.904353783 +0800
+++ 0057-net-ice-base-fix-link-speed-for-200G.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From e3992ab377d2879d6c5bfb220865638404b85dba Mon Sep 17 00:00:00 2001
+From 07885f6f163c85a7476a309661084396922b3d5b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e3992ab377d2879d6c5bfb220865638404b85dba ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 6a89e1614a..3ec207927b 100644
+index 1131379d63..56ba2041f2 100644
@@ -25 +27 @@
-@@ -1624,7 +1624,7 @@ struct ice_aqc_get_link_status_data {
+@@ -1621,7 +1621,7 @@ struct ice_aqc_get_link_status_data {
* patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (56 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
` (62 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Fabio Pricoco
Cc: xuemingl, Jacob Keller, Soumyadeep Hore, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cfaccd4bdaa1a89f4386adbe1a011012a386f56a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cfaccd4bdaa1a89f4386adbe1a011012a386f56a Mon Sep 17 00:00:00 2001
From: Fabio Pricoco <fabio.pricoco@intel.com>
Date: Fri, 23 Aug 2024 09:56:42 +0000
Subject: [PATCH] net/ice/base: fix iteration of TLVs in Preserved Fields Area
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit dcb760bf0f951b404bce33a1dd14906154b58c75 ]
The ice_get_pfa_module_tlv() function iterates over the Preserved Fields
Area to read data from the Shadow RAM, including the Part Board Assembly
data, among others.
If the specific TLV being requested is not found in the current NVM, the
code will read past the end of the PFA, misinterpreting the last word of
the PFA and the word just after the PFA as another TLV. This typically
results in one extra iteration before the length check of the while loop
is triggered.
Correct the logic for determining the maximum PFA offset to include the
extra last word. Additionally, make the driver robust against overflows
by using check_add_overflow. This ensures that even if the NVM provides
bogus data, the driver will not overflow, and will instead log a useful
warning message. The check for whether the TLV length exceeds the PFA
length is also removed, in favor of relying on the overflow warning
instead.
Fixes: 5d0b7b5fc491 ("net/ice/base: add read PBA module function")
Signed-off-by: Fabio Pricoco <fabio.pricoco@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ice/base/ice_nvm.c | 36 ++++++++++++++++++++++------------
1 file changed, 24 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 6b0794f562..98c4c943ca 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -471,6 +471,8 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
return status;
}
+#define check_add_overflow __builtin_add_overflow
+
/**
* ice_get_pfa_module_tlv - Reads sub module TLV from NVM PFA
* @hw: pointer to hardware structure
@@ -487,8 +489,7 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
u16 module_type)
{
enum ice_status status;
- u16 pfa_len, pfa_ptr;
- u32 next_tlv;
+ u16 pfa_len, pfa_ptr, next_tlv, max_tlv;
status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr);
if (status != ICE_SUCCESS) {
@@ -500,11 +501,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n");
return status;
}
- /* Starting with first TLV after PFA length, iterate through the list
+
+ if (check_add_overflow(pfa_ptr, (u16)(pfa_len - 1), &max_tlv)) {
+ ice_debug(hw, ICE_DBG_INIT, "PFA starts at offset %u. PFA length of %u caused 16-bit arithmetic overflow.\n",
+ pfa_ptr, pfa_len);
+ return ICE_ERR_INVAL_SIZE;
+ }
+
+ /* The Preserved Fields Area contains a sequence of TLVs which define
+ * its contents. The PFA length includes all of the TLVs, plus its
+ * initial length word itself, *and* one final word at the end of all
+ * of the TLVs.
+ *
+ * Starting with first TLV after PFA length, iterate through the list
* of TLVs to find the requested one.
*/
next_tlv = pfa_ptr + 1;
- while (next_tlv < ((u32)pfa_ptr + pfa_len)) {
+ while (next_tlv < max_tlv) {
u16 tlv_sub_module_type;
u16 tlv_len;
@@ -521,10 +534,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV length.\n");
break;
}
- if (tlv_len > pfa_len) {
- ice_debug(hw, ICE_DBG_INIT, "Invalid TLV length.\n");
- return ICE_ERR_INVAL_SIZE;
- }
if (tlv_sub_module_type == module_type) {
if (tlv_len) {
*module_tlv = (u16)next_tlv;
@@ -533,10 +542,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
}
return ICE_ERR_INVAL_SIZE;
}
- /* Check next TLV, i.e. current TLV pointer + length + 2 words
- * (for current TLV's type and length)
- */
- next_tlv = next_tlv + tlv_len + 2;
+
+ if (check_add_overflow(next_tlv, (u16)2, &next_tlv) ||
+ check_add_overflow(next_tlv, tlv_len, &next_tlv)) {
+ ice_debug(hw, ICE_DBG_INIT, "TLV of type %u and length 0x%04x caused 16-bit arithmetic overflow. The PFA starts at 0x%04x and has length of 0x%04x\n",
+ tlv_sub_module_type, tlv_len, pfa_ptr, pfa_len);
+ return ICE_ERR_INVAL_SIZE;
+ }
}
/* Module does not exist */
return ICE_ERR_DOES_NOT_EXIST;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.979390082 +0800
+++ 0058-net-ice-base-fix-iteration-of-TLVs-in-Preserved-Fiel.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From dcb760bf0f951b404bce33a1dd14906154b58c75 Mon Sep 17 00:00:00 2001
+From cfaccd4bdaa1a89f4386adbe1a011012a386f56a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit dcb760bf0f951b404bce33a1dd14906154b58c75 ]
@@ -25 +27,0 @@
-Cc: stable@dpdk.org
@@ -36 +38 @@
-index 5e982de4b5..56c6c96a95 100644
+index 6b0794f562..98c4c943ca 100644
@@ -39 +41 @@
-@@ -469,6 +469,8 @@ int ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+@@ -471,6 +471,8 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
@@ -48,2 +50 @@
-@@ -484,8 +486,7 @@ int
- ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -487,8 +489,7 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -51,0 +53 @@
+ enum ice_status status;
@@ -55 +56,0 @@
- int status;
@@ -58 +59,2 @@
-@@ -498,11 +499,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+ if (status != ICE_SUCCESS) {
+@@ -500,11 +501,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -84 +86 @@
-@@ -519,10 +532,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -521,10 +534,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -95 +97 @@
-@@ -531,10 +540,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -533,10 +542,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
* patch 'net/ixgbe/base: fix unchecked return value' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (57 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
` (61 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d3aa94783958688adbc498e1bf6413201dd28cb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d3aa94783958688adbc498e1bf6413201dd28cb Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Thu, 29 Aug 2024 10:00:11 +0100
Subject: [PATCH] net/ixgbe/base: fix unchecked return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit eb3684b191928ebb5d263e3f8ab1e309bfec099e ]
There was an unchecked return value in the ixgbe_stop_mac_link_on_d3_82599
function. Add a check of the return value from the called function
ixgbe_read_eeprom.
Fixes: b7ad3713b958 ("ixgbe/base: allow to disable link on D3")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/base/ixgbe_82599.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ixgbe/base/ixgbe_82599.c b/drivers/net/ixgbe/base/ixgbe_82599.c
index c6e8b7e976..f37d83a0ab 100644
--- a/drivers/net/ixgbe/base/ixgbe_82599.c
+++ b/drivers/net/ixgbe/base/ixgbe_82599.c
@@ -554,13 +554,15 @@ out:
**/
void ixgbe_stop_mac_link_on_d3_82599(struct ixgbe_hw *hw)
{
- u32 autoc2_reg;
u16 ee_ctrl_2 = 0;
+ u32 autoc2_reg;
+ u32 status;
DEBUGFUNC("ixgbe_stop_mac_link_on_d3_82599");
- ixgbe_read_eeprom(hw, IXGBE_EEPROM_CTRL_2, &ee_ctrl_2);
+ status = ixgbe_read_eeprom(hw, IXGBE_EEPROM_CTRL_2, &ee_ctrl_2);
- if (!ixgbe_mng_present(hw) && !hw->wol_enabled &&
+ if (status == IXGBE_SUCCESS &&
+ !ixgbe_mng_present(hw) && !hw->wol_enabled &&
ee_ctrl_2 & IXGBE_EEPROM_CCD_BIT) {
autoc2_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC2);
autoc2_reg |= IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.057296280 +0800
+++ 0059-net-ixgbe-base-fix-unchecked-return-value.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From eb3684b191928ebb5d263e3f8ab1e309bfec099e Mon Sep 17 00:00:00 2001
+From 1d3aa94783958688adbc498e1bf6413201dd28cb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit eb3684b191928ebb5d263e3f8ab1e309bfec099e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index c4ad906f0f..3110477700 100644
+index c6e8b7e976..f37d83a0ab 100644
@@ -24 +26 @@
-@@ -556,13 +556,15 @@ out:
+@@ -554,13 +554,15 @@ out:
* patch 'net/i40e/base: fix setting flags in init function' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (58 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
` (60 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c04a0407ada46498d4a261aa53fca0f0ef118cb2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c04a0407ada46498d4a261aa53fca0f0ef118cb2 Mon Sep 17 00:00:00 2001
From: Anatoly Burakov <anatoly.burakov@intel.com>
Date: Mon, 2 Sep 2024 10:54:17 +0100
Subject: [PATCH] net/i40e/base: fix setting flags in init function
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit deb7c447d088903d06a76e2c719a8207c94a576e ]
The functionality to set i40e_hw's flags was moved to its own function
in AQ a while ago. However, the setting of hw->flags for X722 was not
removed, even though it has become unnecessary.
Fixes: 37b091c75b13 ("net/i40e/base: extend PHY access AQ command")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index ab655a0a72..821fb2fb36 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1019,9 +1019,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
else
hw->pf_id = (u8)(func_rid & 0x7);
- if (hw->mac.type == I40E_MAC_X722)
- hw->flags |= I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE |
- I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK;
/* NVMUpdate features structure initialization */
hw->nvmupd_features.major = I40E_NVMUPD_FEATURES_API_VER_MAJOR;
hw->nvmupd_features.minor = I40E_NVMUPD_FEATURES_API_VER_MINOR;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.113492979 +0800
+++ 0060-net-i40e-base-fix-setting-flags-in-init-function.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From deb7c447d088903d06a76e2c719a8207c94a576e Mon Sep 17 00:00:00 2001
+From c04a0407ada46498d4a261aa53fca0f0ef118cb2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit deb7c447d088903d06a76e2c719a8207c94a576e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index e4de508aea..451cc2c1c7 100644
+index ab655a0a72..821fb2fb36 100644
@@ -23 +25 @@
-@@ -980,9 +980,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
+@@ -1019,9 +1019,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
* patch 'net/i40e/base: fix misleading debug logs and comments' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (59 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
` (59 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Aleksandr Loktionov
Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5ffbf6263003cc122adc4edadab31329d028c3c2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5ffbf6263003cc122adc4edadab31329d028c3c2 Mon Sep 17 00:00:00 2001
From: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Date: Mon, 2 Sep 2024 10:54:18 +0100
Subject: [PATCH] net/i40e/base: fix misleading debug logs and comments
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 719ec1bfebde956b661d403ef73ecb1e7483d50f ]
Both comments and debug logs for i40e_read_nvm_aq refer to writing, when
in actuality it's a read function. Fix both comments and debug logs.
Fixes: a8ac0bae54ae ("i40e/base: update shadow RAM read/write")
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_nvm.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c
index f385042601..05816a4b79 100644
--- a/drivers/net/i40e/base/i40e_nvm.c
+++ b/drivers/net/i40e/base/i40e_nvm.c
@@ -223,11 +223,11 @@ read_nvm_exit:
* @hw: pointer to the HW structure.
* @module_pointer: module pointer location in words from the NVM beginning
* @offset: offset in words from module start
- * @words: number of words to write
- * @data: buffer with words to write to the Shadow RAM
+ * @words: number of words to read
+ * @data: buffer with words to read from the Shadow RAM
* @last_command: tells the AdminQ that this is the last command
*
- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
+ * Reads a 16 bit words buffer to the Shadow RAM using the admin command.
**/
STATIC enum i40e_status_code i40e_read_nvm_aq(struct i40e_hw *hw,
u8 module_pointer, u32 offset,
@@ -249,18 +249,18 @@ STATIC enum i40e_status_code i40e_read_nvm_aq(struct i40e_hw *hw,
*/
if ((offset + words) > hw->nvm.sr_size)
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write error: offset %d beyond Shadow RAM limit %d\n",
+ "NVM read error: offset %d beyond Shadow RAM limit %d\n",
(offset + words), hw->nvm.sr_size);
else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS)
- /* We can write only up to 4KB (one sector), in one AQ write */
+ /* We can read only up to 4KB (one sector), in one AQ read */
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write fail error: tried to write %d words, limit is %d.\n",
+ "NVM read fail error: tried to read %d words, limit is %d.\n",
words, I40E_SR_SECTOR_SIZE_IN_WORDS);
else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS)
!= (offset / I40E_SR_SECTOR_SIZE_IN_WORDS))
- /* A single write cannot spread over two sectors */
+ /* A single read cannot spread over two sectors */
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write error: cannot spread over two sectors in a single write offset=%d words=%d\n",
+ "NVM read error: cannot spread over two sectors in a single read offset=%d words=%d\n",
offset, words);
else
ret_code = i40e_aq_read_nvm(hw, module_pointer,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.148919978 +0800
+++ 0061-net-i40e-base-fix-misleading-debug-logs-and-comments.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From 719ec1bfebde956b661d403ef73ecb1e7483d50f Mon Sep 17 00:00:00 2001
+From 5ffbf6263003cc122adc4edadab31329d028c3c2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 719ec1bfebde956b661d403ef73ecb1e7483d50f ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
* patch 'net/i40e/base: add missing X710TL device check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (60 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
` (58 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9f966b465d8ce98430feeb81a1db414b83a26a8d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9f966b465d8ce98430feeb81a1db414b83a26a8d Mon Sep 17 00:00:00 2001
From: Anatoly Burakov <anatoly.burakov@intel.com>
Date: Mon, 2 Sep 2024 10:54:19 +0100
Subject: [PATCH] net/i40e/base: add missing X710TL device check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 597e19e7eae17beb820795c3a8a97c547870ba26 ]
Commit c0ce1c4677fd ("net/i40e: add new X722 device") added a new X722
define as well as one for X710T*L (which wasn't called out in the commit
message); however, it was not added to the I40E_IS_X710TL_DEVICE check.
This patch adds the missing define to the check.
Fixes: c0ce1c4677fd ("net/i40e: add new X722 device")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_devids.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/base/i40e_devids.h b/drivers/net/i40e/base/i40e_devids.h
index ee31e51f57..37d7ee9939 100644
--- a/drivers/net/i40e/base/i40e_devids.h
+++ b/drivers/net/i40e/base/i40e_devids.h
@@ -42,7 +42,8 @@
#define I40E_DEV_ID_10G_SFP 0x104E
#define I40E_IS_X710TL_DEVICE(d) \
(((d) == I40E_DEV_ID_10G_BASE_T_BC) || \
- ((d) == I40E_DEV_ID_5G_BASE_T_BC))
+ ((d) == I40E_DEV_ID_5G_BASE_T_BC) || \
+ ((d) == I40E_DEV_ID_1G_BASE_T_BC))
#define I40E_DEV_ID_KX_X722 0x37CE
#define I40E_DEV_ID_QSFP_X722 0x37CF
#define I40E_DEV_ID_SFP_X722 0x37D0
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.181442678 +0800
+++ 0062-net-i40e-base-add-missing-X710TL-device-check.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From 597e19e7eae17beb820795c3a8a97c547870ba26 Mon Sep 17 00:00:00 2001
+From 9f966b465d8ce98430feeb81a1db414b83a26a8d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 597e19e7eae17beb820795c3a8a97c547870ba26 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 11e98f1f28..0a323566d1 100644
+index ee31e51f57..37d7ee9939 100644
@@ -24,2 +26,2 @@
-@@ -31,7 +31,8 @@
- #define I40E_DEV_ID_1G_BASE_T_BC 0x0DD2
+@@ -42,7 +42,8 @@
+ #define I40E_DEV_ID_10G_SFP 0x104E
* patch 'net/i40e/base: fix blinking X722 with X557 PHY' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Eryk Rybak; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cd6164532228038bc6c4d96e51e83f241a53b7da
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cd6164532228038bc6c4d96e51e83f241a53b7da Mon Sep 17 00:00:00 2001
From: Eryk Rybak <eryk.roch.rybak@intel.com>
Date: Mon, 2 Sep 2024 10:54:23 +0100
Subject: [PATCH] net/i40e/base: fix blinking X722 with X557 PHY
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bf0183e9ab98c946e0c7e178149e4b685465b9b1 ]
On X722 with an X557 PHY, LEDs do not blink under certain circumstances
because the function avoided triggering LED activity when it detected that
the LED was already active. Fix this by always triggering LED blinking
regardless of the LED state.
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Eryk Rybak <eryk.roch.rybak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 32 -----------------------------
1 file changed, 32 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 821fb2fb36..243c547d19 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1587,7 +1587,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
**/
u32 i40e_led_get(struct i40e_hw *hw)
{
- u32 current_mode = 0;
u32 mode = 0;
int i;
@@ -1600,21 +1599,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
if (!gpio_val)
continue;
- /* ignore gpio LED src mode entries related to the activity
- * LEDs
- */
- current_mode = ((gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK)
- >> I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT);
- switch (current_mode) {
- case I40E_COMBINED_ACTIVITY:
- case I40E_FILTER_ACTIVITY:
- case I40E_MAC_ACTIVITY:
- case I40E_LINK_ACTIVITY:
- continue;
- default:
- break;
- }
-
mode = (gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK) >>
I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT;
break;
@@ -1634,7 +1618,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
**/
void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
{
- u32 current_mode = 0;
int i;
if (mode & ~I40E_LED_MODE_VALID) {
@@ -1651,21 +1634,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
if (!gpio_val)
continue;
- /* ignore gpio LED src mode entries related to the activity
- * LEDs
- */
- current_mode = ((gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK)
- >> I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT);
- switch (current_mode) {
- case I40E_COMBINED_ACTIVITY:
- case I40E_FILTER_ACTIVITY:
- case I40E_MAC_ACTIVITY:
- case I40E_LINK_ACTIVITY:
- continue;
- default:
- break;
- }
-
if (I40E_IS_X710TL_DEVICE(hw->device_id)) {
u32 pin_func = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.214233477 +0800
+++ 0063-net-i40e-base-fix-blinking-X722-with-X557-PHY.patch 2024-11-11 14:23:05.152192839 +0800
@@ -1 +1 @@
-From bf0183e9ab98c946e0c7e178149e4b685465b9b1 Mon Sep 17 00:00:00 2001
+From cd6164532228038bc6c4d96e51e83f241a53b7da Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bf0183e9ab98c946e0c7e178149e4b685465b9b1 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index be27cc9d0b..80500697ed 100644
+index 821fb2fb36..243c547d19 100644
@@ -25 +27 @@
-@@ -1548,7 +1548,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
+@@ -1587,7 +1587,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
@@ -33 +35 @@
-@@ -1561,21 +1560,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
+@@ -1600,21 +1599,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
@@ -55 +57 @@
-@@ -1595,7 +1579,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
+@@ -1634,7 +1618,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
@@ -63 +65 @@
-@@ -1612,21 +1595,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
+@@ -1651,21 +1634,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
* patch 'net/i40e/base: fix DDP loading with reserved track ID' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Artur Tyminski; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=77d7459a65fb61be0f040a9ce78ae63d360e4f4e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 77d7459a65fb61be0f040a9ce78ae63d360e4f4e Mon Sep 17 00:00:00 2001
From: Artur Tyminski <arturx.tyminski@intel.com>
Date: Mon, 2 Sep 2024 10:54:26 +0100
Subject: [PATCH] net/i40e/base: fix DDP loading with reserved track ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f646061cd9328f1265d8b9996c9b734ab2ce3707 ]
Packages with reserved track IDs should not be loaded, yet currently the
driver checks only one of the reserved IDs, not the other. Fix the DDP
package loading to also check for the other reserved track ID.
Fixes: 496a357f1118 ("net/i40e/base: extend processing of DDP")
Signed-off-by: Artur Tyminski <arturx.tyminski@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 243c547d19..4d565963a9 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -8168,7 +8168,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u32 sec_off;
u32 i;
- if (track_id == I40E_DDP_TRACKID_INVALID) {
+ if (track_id == I40E_DDP_TRACKID_INVALID ||
+ track_id == I40E_DDP_TRACKID_RDONLY) {
i40e_debug(hw, I40E_DEBUG_PACKAGE, "Invalid track_id\n");
return I40E_NOT_SUPPORTED;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.250022976 +0800
+++ 0064-net-i40e-base-fix-DDP-loading-with-reserved-track-ID.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From f646061cd9328f1265d8b9996c9b734ab2ce3707 Mon Sep 17 00:00:00 2001
+From 77d7459a65fb61be0f040a9ce78ae63d360e4f4e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f646061cd9328f1265d8b9996c9b734ab2ce3707 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index a43b89aaeb..693608ac99 100644
+index 243c547d19..4d565963a9 100644
@@ -25 +27 @@
-@@ -8048,7 +8048,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
+@@ -8168,7 +8168,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
* patch 'net/i40e/base: fix repeated register dumps' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Radoslaw Tyl; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7c44c1b3f8b013f9e53710ddc05663dfe98a8f33
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7c44c1b3f8b013f9e53710ddc05663dfe98a8f33 Mon Sep 17 00:00:00 2001
From: Radoslaw Tyl <radoslawx.tyl@intel.com>
Date: Mon, 2 Sep 2024 10:54:33 +0100
Subject: [PATCH] net/i40e/base: fix repeated register dumps
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit efc6a6b1facfa160e5e72f55893a301a6b27c628 ]
Currently, when registers are dumped, the data inside them is changed, so
repeated dumps lead to unexpected results. Fix this by making the register
list read-only.
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Radoslaw Tyl <radoslawx.tyl@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_diag.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_diag.c b/drivers/net/i40e/base/i40e_diag.c
index b3c4cfd3aa..4ca102cdd5 100644
--- a/drivers/net/i40e/base/i40e_diag.c
+++ b/drivers/net/i40e/base/i40e_diag.c
@@ -55,7 +55,7 @@ static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw,
return I40E_SUCCESS;
}
-static struct i40e_diag_reg_test_info i40e_reg_list[] = {
+static const struct i40e_diag_reg_test_info i40e_reg_list[] = {
/* offset mask elements stride */
{I40E_QTX_CTL(0), 0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
{I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)},
@@ -81,28 +81,28 @@ enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw)
{
enum i40e_status_code ret_code = I40E_SUCCESS;
u32 reg, mask;
+ u32 elements;
u32 i, j;
for (i = 0; i40e_reg_list[i].offset != 0 &&
ret_code == I40E_SUCCESS; i++) {
+ elements = i40e_reg_list[i].elements;
/* set actual reg range for dynamically allocated resources */
if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
hw->func_caps.num_tx_qp != 0)
- i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
+ elements = hw->func_caps.num_tx_qp;
if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
hw->func_caps.num_msix_vectors != 0)
- i40e_reg_list[i].elements =
- hw->func_caps.num_msix_vectors - 1;
+ elements = hw->func_caps.num_msix_vectors - 1;
/* test register access */
mask = i40e_reg_list[i].mask;
- for (j = 0; j < i40e_reg_list[i].elements &&
- ret_code == I40E_SUCCESS; j++) {
+ for (j = 0; j < elements && ret_code == I40E_SUCCESS; j++) {
reg = i40e_reg_list[i].offset
+ (j * i40e_reg_list[i].stride);
ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:08.297586475 +0800
+++ 0065-net-i40e-base-fix-repeated-register-dumps.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From efc6a6b1facfa160e5e72f55893a301a6b27c628 Mon Sep 17 00:00:00 2001
+From 7c44c1b3f8b013f9e53710ddc05663dfe98a8f33 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit efc6a6b1facfa160e5e72f55893a301a6b27c628 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
* patch 'net/i40e/base: fix unchecked return value' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bfc9bdcad325d486a798120507a7d93bfce2473d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bfc9bdcad325d486a798120507a7d93bfce2473d Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Mon, 2 Sep 2024 10:54:34 +0100
Subject: [PATCH] net/i40e/base: fix unchecked return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7fb34b9141aab299c2b84656ec5b12bf41f1c21d ]
Static analysis tools have reported an unchecked return value warning.
Address the warning by checking the return value.
Fixes: 2450cc2dc871 ("i40e/base: find partition id in NPAR mode and disable FCoE")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 4d565963a9..547f5e3c2c 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -4228,8 +4228,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
/* use AQ read to get the physical register offset instead
* of the port relative offset
*/
- i40e_aq_debug_read_register(hw, port_cfg_reg, &port_cfg, NULL);
- if (!(port_cfg & I40E_PRTGEN_CNF_PORT_DIS_MASK))
+ status = i40e_aq_debug_read_register(hw, port_cfg_reg, &port_cfg, NULL);
+ if ((status == I40E_SUCCESS) && (!(port_cfg & I40E_PRTGEN_CNF_PORT_DIS_MASK)))
hw->num_ports++;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.330951475 +0800
+++ 0066-net-i40e-base-fix-unchecked-return-value.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From 7fb34b9141aab299c2b84656ec5b12bf41f1c21d Mon Sep 17 00:00:00 2001
+From bfc9bdcad325d486a798120507a7d93bfce2473d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7fb34b9141aab299c2b84656ec5b12bf41f1c21d ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 416f31dcc3..07e18deaea 100644
+index 4d565963a9..547f5e3c2c 100644
@@ -23 +25 @@
-@@ -4215,8 +4215,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
+@@ -4228,8 +4228,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
* patch 'net/i40e/base: fix loop bounds' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=453f75257c416c6fd10b2011fcd463275e017ab4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 453f75257c416c6fd10b2011fcd463275e017ab4 Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Mon, 2 Sep 2024 10:54:35 +0100
Subject: [PATCH] net/i40e/base: fix loop bounds
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 ]
An unchecked value was used as a loop bound. Add verification that the
value of the 'next_to_clean' variable does not exceed 2^10 (next_to_clean
is 10 bits). Also refactor the loop so that it reads the head value only
once per iteration and checks whether the head is invalid.
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_adminq.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_adminq.c b/drivers/net/i40e/base/i40e_adminq.c
index 27c82d9b44..cd3b0f2e45 100644
--- a/drivers/net/i40e/base/i40e_adminq.c
+++ b/drivers/net/i40e/base/i40e_adminq.c
@@ -791,12 +791,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
u16 ntc = asq->next_to_clean;
struct i40e_aq_desc desc_cb;
struct i40e_aq_desc *desc;
+ u32 head = 0;
+
+ if (ntc >= (1 << 10))
+ goto clean_asq_exit;
desc = I40E_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
- while (rd32(hw, hw->aq.asq.head) != ntc) {
+ while (true) {
+ head = rd32(hw, hw->aq.asq.head);
+
+ if (head >= asq->count) {
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND, "Read head value is improper\n");
+ return 0;
+ }
+
+ if (head == ntc)
+ break;
+
i40e_debug(hw, I40E_DEBUG_AQ_COMMAND,
- "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+ "ntc %d head %d.\n", ntc, head);
if (details->callback) {
I40E_ADMINQ_CALLBACK cb_func =
@@ -816,6 +830,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
asq->next_to_clean = ntc;
+clean_asq_exit:
return I40E_DESC_UNUSED(asq);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.369502474 +0800
+++ 0067-net-i40e-base-fix-loop-bounds.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 Mon Sep 17 00:00:00 2001
+From 453f75257c416c6fd10b2011fcd463275e017ab4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index b670250180..350288269b 100644
+index 27c82d9b44..cd3b0f2e45 100644
@@ -26 +28 @@
-@@ -745,12 +745,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
+@@ -791,12 +791,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
@@ -55 +57 @@
-@@ -770,6 +784,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
+@@ -816,6 +830,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
* patch 'net/iavf: delay VF reset command' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, David Marchand, Hongbo Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=07e76f97817655b73aeebfd3dff698a4a1f5be4d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 07e76f97817655b73aeebfd3dff698a4a1f5be4d Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Mon, 9 Sep 2024 12:03:56 +0100
Subject: [PATCH] net/iavf: delay VF reset command
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b34fe66ea893c74f09322dc1109e80e81faa7d4f ]
Commit 0f9ec0cbd2a9 ("net/iavf: fix VF reset when using DCF")
introduced a VF-reset adminq call into the reset sequence for iavf.
However, that call was made very early in the sequence, before other
adminq commands had been sent.
To delay the VF reset, we can put the message sending in the "dev_close"
function, right before the adminq is shut down, thereby guaranteeing
that we won't have any subsequent issues with adminq messages.
In the process of making this change, we can also use the iavf_vf_reset
function from common/iavf, rather than hard-coding the message sending
lower-level calls in the net driver.
Fixes: e74e1bb6280d ("net/iavf: enable port reset")
Fixes: 0f9ec0cbd2a9 ("net/iavf: fix VF reset when using DCF")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Hongbo Li <hongbox.li@intel.com>
---
drivers/common/iavf/iavf_prototype.h | 1 +
drivers/common/iavf/version.map | 1 +
drivers/net/iavf/iavf_ethdev.c | 12 +-----------
3 files changed, 3 insertions(+), 11 deletions(-)
diff --git a/drivers/common/iavf/iavf_prototype.h b/drivers/common/iavf/iavf_prototype.h
index ba78ec5169..7c43a817bb 100644
--- a/drivers/common/iavf/iavf_prototype.h
+++ b/drivers/common/iavf/iavf_prototype.h
@@ -79,6 +79,7 @@ STATIC INLINE struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
__rte_internal
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
+__rte_internal
enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
__rte_internal
enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
diff --git a/drivers/common/iavf/version.map b/drivers/common/iavf/version.map
index e0f117197c..6c1427cca4 100644
--- a/drivers/common/iavf/version.map
+++ b/drivers/common/iavf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
iavf_set_mac_type;
iavf_shutdown_adminq;
iavf_vf_parse_hw_config;
+ iavf_vf_reset;
local: *;
};
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 9087909ec2..1a98c7734c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2875,6 +2875,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
if (vf->promisc_unicast_enabled || vf->promisc_multicast_enabled)
iavf_config_promisc(adapter, false, false);
+ iavf_vf_reset(hw);
iavf_shutdown_adminq(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* disable uio intr before callback unregister */
@@ -2954,17 +2955,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
-
- if (!vf->in_reset_recovery) {
- ret = iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
- IAVF_SUCCESS, NULL, 0, NULL);
- if (ret) {
- PMD_DRV_LOG(ERR, "fail to send cmd VIRTCHNL_OP_RESET_VF");
- return ret;
- }
- }
-
/*
* Check whether the VF reset has been done and inform application,
* to avoid calling the virtual channel command, which may cause
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.403242673 +0800
+++ 0068-net-iavf-delay-VF-reset-command.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From b34fe66ea893c74f09322dc1109e80e81faa7d4f Mon Sep 17 00:00:00 2001
+From 07e76f97817655b73aeebfd3dff698a4a1f5be4d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b34fe66ea893c74f09322dc1109e80e81faa7d4f ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -57 +59 @@
-index c56fcfadf0..c200f63b4f 100644
+index 9087909ec2..1a98c7734c 100644
@@ -60 +62 @@
-@@ -2962,6 +2962,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
+@@ -2875,6 +2875,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
@@ -68 +70 @@
-@@ -3041,17 +3042,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
+@@ -2954,17 +2955,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
* patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ff90a3bb8523c29d5e02b6ff2c8e79345ba177be
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ff90a3bb8523c29d5e02b6ff2c8e79345ba177be Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:24 +0100
Subject: [PATCH] net/i40e: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2d040df2437a025ef6d2ecf72de96d5c9fe97439 ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load/store
operations.
Fixes: 5171b4ee6b6b ("net/i40e: optimize Tx by using AVX512")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index f3050cd06c..62fce19dc4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -799,6 +799,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_load_si512(&txep[copied]);
const __m512i b = _mm512_load_si512(&txep[copied + 8]);
const __m512i c = _mm512_load_si512(&txep[copied + 16]);
@@ -808,6 +809,12 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_load_si512(&txep[copied]);
+ const __m512i b = _mm512_load_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.441488072 +0800
+++ 0069-net-i40e-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 2d040df2437a025ef6d2ecf72de96d5c9fe97439 Mon Sep 17 00:00:00 2001
+From ff90a3bb8523c29d5e02b6ff2c8e79345ba177be Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2d040df2437a025ef6d2ecf72de96d5c9fe97439 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 0238b03f8a..3b2750221b 100644
+index f3050cd06c..62fce19dc4 100644
* patch 'net/ice: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d3a59470caf12bea8b3cf166d7965509b2e1de5a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d3a59470caf12bea8b3cf166d7965509b2e1de5a Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:25 +0100
Subject: [PATCH] net/ice: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: a4e480de268e ("net/ice: optimize Tx by using AVX512")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/ice/ice_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 04148e8ea2..add095ef06 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -907,6 +907,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -916,6 +917,12 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.474181072 +0800
+++ 0070-net-ice-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 Mon Sep 17 00:00:00 2001
+From d3a59470caf12bea8b3cf166d7965509b2e1de5a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/iavf: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (69 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
` (49 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bbcdd3ea1a4193c5e2198cabdd843899669acf61
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bbcdd3ea1a4193c5e2198cabdd843899669acf61 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:26 +0100
Subject: [PATCH] net/iavf: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 77608b24bdd840d323ebd9cb6ffffaf5c760983e ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: 9ab9514c150e ("net/iavf: enable AVX512 for Tx")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 7a7df6d258..0e94eada4a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1892,6 +1892,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -1901,6 +1902,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.507644371 +0800
+++ 0071-net-iavf-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 77608b24bdd840d323ebd9cb6ffffaf5c760983e Mon Sep 17 00:00:00 2001
+From bbcdd3ea1a4193c5e2198cabdd843899669acf61 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 77608b24bdd840d323ebd9cb6ffffaf5c760983e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3bb6f305df..d6a861bf80 100644
+index 7a7df6d258..0e94eada4a 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'common/idpf: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (70 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/iavf: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
` (48 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fbbdaf66bbe686f9f864e55f4dc90a95f0b49637
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fbbdaf66bbe686f9f864e55f4dc90a95f0b49637 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:27 +0100
Subject: [PATCH] common/idpf: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d16364e3bdbfd9e07a487bf776a829c565337e3c ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: 5bf87b45b2c8 ("net/idpf: add AVX512 data path for single queue model")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/common/idpf/idpf_common_rxtx_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index f65e8d512b..5abafc729b 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -1043,6 +1043,7 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -1052,6 +1053,12 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.545663870 +0800
+++ 0072-common-idpf-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From d16364e3bdbfd9e07a487bf776a829c565337e3c Mon Sep 17 00:00:00 2001
+From fbbdaf66bbe686f9f864e55f4dc90a95f0b49637 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d16364e3bdbfd9e07a487bf776a829c565337e3c ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3b5e124ec8..b8450b03ae 100644
+index f65e8d512b..5abafc729b 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/gve: fix queue setup and stop' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (71 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
` (47 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Tathagat Priyadarshi; +Cc: xuemingl, Joshua Washington, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=33a52ddb8c7ebab75384199c141ce6097dd23855
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 33a52ddb8c7ebab75384199c141ce6097dd23855 Mon Sep 17 00:00:00 2001
From: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Date: Wed, 31 Jul 2024 05:26:43 +0000
Subject: [PATCH] net/gve: fix queue setup and stop
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7174c8891dcfb2a148e03c5fe2f200742b2dadbe ]
Update the Tx/Rx queue setup/stop routines that are unique to DQO,
so that they may be called for instances that use the DQO RDA format
during dev start/stop
Fixes: b044845bb015 ("net/gve: support queue start/stop")
Signed-off-by: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Acked-by: Joshua Washington <joshwash@google.com>
---
drivers/net/gve/gve_ethdev.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ecd37ff37f..bd683a64d7 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -140,11 +140,16 @@ gve_start_queues(struct rte_eth_dev *dev)
PMD_DRV_LOG(ERR, "Failed to create %u tx queues.", num_queues);
return ret;
}
- for (i = 0; i < num_queues; i++)
- if (gve_tx_queue_start(dev, i) != 0) {
+ for (i = 0; i < num_queues; i++) {
+ if (gve_is_gqi(priv))
+ ret = gve_tx_queue_start(dev, i);
+ else
+ ret = gve_tx_queue_start_dqo(dev, i);
+ if (ret != 0) {
PMD_DRV_LOG(ERR, "Fail to start Tx queue %d", i);
goto err_tx;
}
+ }
num_queues = dev->data->nb_rx_queues;
priv->rxqs = (struct gve_rx_queue **)dev->data->rx_queues;
@@ -167,9 +172,15 @@ gve_start_queues(struct rte_eth_dev *dev)
return 0;
err_rx:
- gve_stop_rx_queues(dev);
+ if (gve_is_gqi(priv))
+ gve_stop_rx_queues(dev);
+ else
+ gve_stop_rx_queues_dqo(dev);
err_tx:
- gve_stop_tx_queues(dev);
+ if (gve_is_gqi(priv))
+ gve_stop_tx_queues(dev);
+ else
+ gve_stop_tx_queues_dqo(dev);
return ret;
}
@@ -193,10 +204,16 @@ gve_dev_start(struct rte_eth_dev *dev)
static int
gve_dev_stop(struct rte_eth_dev *dev)
{
+ struct gve_priv *priv = dev->data->dev_private;
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
- gve_stop_tx_queues(dev);
- gve_stop_rx_queues(dev);
+ if (gve_is_gqi(priv)) {
+ gve_stop_tx_queues(dev);
+ gve_stop_rx_queues(dev);
+ } else {
+ gve_stop_tx_queues_dqo(dev);
+ gve_stop_rx_queues_dqo(dev);
+ }
dev->data->dev_started = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.585394669 +0800
+++ 0073-net-gve-fix-queue-setup-and-stop.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 7174c8891dcfb2a148e03c5fe2f200742b2dadbe Mon Sep 17 00:00:00 2001
+From 33a52ddb8c7ebab75384199c141ce6097dd23855 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7174c8891dcfb2a148e03c5fe2f200742b2dadbe ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index d202940873..db4ebe7036 100644
+index ecd37ff37f..bd683a64d7 100644
@@ -23 +25 @@
-@@ -288,11 +288,16 @@ gve_start_queues(struct rte_eth_dev *dev)
+@@ -140,11 +140,16 @@ gve_start_queues(struct rte_eth_dev *dev)
@@ -42 +44 @@
-@@ -315,9 +320,15 @@ gve_start_queues(struct rte_eth_dev *dev)
+@@ -167,9 +172,15 @@ gve_start_queues(struct rte_eth_dev *dev)
@@ -60 +62 @@
-@@ -362,10 +373,16 @@ gve_dev_start(struct rte_eth_dev *dev)
+@@ -193,10 +204,16 @@ gve_dev_start(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/gve: fix Tx for chained mbuf' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (72 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
` (46 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Tathagat Priyadarshi
Cc: xuemingl, Varun Lakkur Ambaji Rao, Joshua Washington, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=32d8e5279beaacefe5e7cf91f4d265ac87667ce6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 32d8e5279beaacefe5e7cf91f4d265ac87667ce6 Mon Sep 17 00:00:00 2001
From: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Date: Fri, 2 Aug 2024 05:08:08 +0000
Subject: [PATCH] net/gve: fix Tx for chained mbuf
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 ]
The EOP and CSUM bit was not set for all the packets in mbuf chain
causing packet transmission stalls for packets split across
mbuf in chain.
Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Signed-off-by: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Signed-off-by: Varun Lakkur Ambaji Rao <varun.la@gmail.com>
Acked-by: Joshua Washington <joshwash@google.com>
---
.mailmap | 1 +
drivers/net/gve/gve_ethdev.h | 2 ++
drivers/net/gve/gve_tx_dqo.c | 9 ++++++---
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/.mailmap b/.mailmap
index a72dce1a61..5f2593e00e 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1489,6 +1489,7 @@ Vadim Suraev <vadim.suraev@gmail.com>
Vakul Garg <vakul.garg@nxp.com>
Vamsi Attunuru <vattunuru@marvell.com>
Vanshika Shukla <vanshika.shukla@nxp.com>
+Varun Lakkur Ambaji Rao <varun.la@gmail.com>
Varun Sethi <v.sethi@nxp.com>
Vasily Philipov <vasilyf@mellanox.com>
Veerasenareddy Burru <vburru@marvell.com>
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 58d8943e71..133860488c 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -33,6 +33,8 @@
RTE_MBUF_F_TX_L4_MASK | \
RTE_MBUF_F_TX_TCP_SEG)
+#define GVE_TX_CKSUM_OFFLOAD_MASK_DQO (GVE_TX_CKSUM_OFFLOAD_MASK | RTE_MBUF_F_TX_IP_CKSUM)
+
/* A list of pages registered with the device during setup and used by a queue
* as buffers
*/
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 97d9c6549b..b9d6d01749 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -89,6 +89,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
uint16_t sw_id;
uint64_t bytes;
uint16_t first_sw_id;
+ uint8_t csum;
sw_ring = txq->sw_ring;
txr = txq->tx_ring;
@@ -114,6 +115,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ol_flags = tx_pkt->ol_flags;
nb_used = tx_pkt->nb_segs;
first_sw_id = sw_id;
+
+ csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
+
do {
if (sw_ring[sw_id] != NULL)
PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
@@ -126,6 +130,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
+ txd->pkt.end_of_packet = 0;
+ txd->pkt.checksum_offload_enable = csum;
/* size of desc_ring and sw_ring could be different */
tx_id = (tx_id + 1) & mask;
@@ -138,9 +144,6 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* fill the last descriptor with End of Packet (EOP) bit */
txd->pkt.end_of_packet = 1;
- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
- txd->pkt.checksum_offload_enable = 1;
-
txq->nb_free -= nb_used;
txq->nb_used += nb_used;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.630247869 +0800
+++ 0074-net-gve-fix-Tx-for-chained-mbuf.patch 2024-11-11 14:23:05.182192838 +0800
@@ -1 +1 @@
-From 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 Mon Sep 17 00:00:00 2001
+From 32d8e5279beaacefe5e7cf91f4d265ac87667ce6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -17,0 +20 @@
+ drivers/net/gve/gve_ethdev.h | 2 ++
@@ -19 +22 @@
- 2 files changed, 7 insertions(+), 3 deletions(-)
+ 3 files changed, 9 insertions(+), 3 deletions(-)
@@ -22 +25 @@
-index 3abb37673b..995a6f0553 100644
+index a72dce1a61..5f2593e00e 100644
@@ -25 +28,2 @@
-@@ -1553,6 +1553,7 @@ Vakul Garg <vakul.garg@nxp.com>
+@@ -1489,6 +1489,7 @@ Vadim Suraev <vadim.suraev@gmail.com>
+ Vakul Garg <vakul.garg@nxp.com>
@@ -27 +30,0 @@
- Vamsi Krishna Atluri <vamsi.atluri@amd.com>
@@ -32,0 +36,13 @@
+diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
+index 58d8943e71..133860488c 100644
+--- a/drivers/net/gve/gve_ethdev.h
++++ b/drivers/net/gve/gve_ethdev.h
+@@ -33,6 +33,8 @@
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
++#define GVE_TX_CKSUM_OFFLOAD_MASK_DQO (GVE_TX_CKSUM_OFFLOAD_MASK | RTE_MBUF_F_TX_IP_CKSUM)
++
+ /* A list of pages registered with the device during setup and used by a queue
+ * as buffers
+ */
@@ -34 +50 @@
-index 1b85557a15..b9d6d01749 100644
+index 97d9c6549b..b9d6d01749 100644
@@ -68 +84 @@
-- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO)
+- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/tap: avoid memcpy with null argument' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (73 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
` (45 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4014df8b926a9e46faeff7514cfb76f26fb74eb1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4014df8b926a9e46faeff7514cfb76f26fb74eb1 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 13 Aug 2024 19:34:16 -0700
Subject: [PATCH] net/tap: avoid memcpy with null argument
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3975d85fb8606308ccdb6439b35f70e8733a78e8 ]
Calling memcpy with a null pointer even if zero length is
undefined, so check if data_length is zero.
Problem reported by Gcc analyzer.
Fixes: 7c25284e30c2 ("net/tap: add netlink back-end for flow API")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/tap/tap_netlink.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/tap/tap_netlink.c b/drivers/net/tap/tap_netlink.c
index 75af3404b0..c1f7ff56da 100644
--- a/drivers/net/tap/tap_netlink.c
+++ b/drivers/net/tap/tap_netlink.c
@@ -301,7 +301,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
rta = (struct rtattr *)NLMSG_TAIL(nh);
rta->rta_len = RTA_LENGTH(data_len);
rta->rta_type = type;
- memcpy(RTA_DATA(rta), data, data_len);
+ if (data_len > 0)
+ memcpy(RTA_DATA(rta), data, data_len);
nh->nlmsg_len = NLMSG_ALIGN(nh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.664473368 +0800
+++ 0075-net-tap-avoid-memcpy-with-null-argument.patch 2024-11-11 14:23:05.182192838 +0800
@@ -1 +1 @@
-From 3975d85fb8606308ccdb6439b35f70e8733a78e8 Mon Sep 17 00:00:00 2001
+From 4014df8b926a9e46faeff7514cfb76f26fb74eb1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3975d85fb8606308ccdb6439b35f70e8733a78e8 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index d9c260127d..35c491ac37 100644
+index 75af3404b0..c1f7ff56da 100644
@@ -23 +25 @@
-@@ -302,7 +302,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
+@@ -301,7 +301,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'app/testpmd: remove unnecessary cast' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (74 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
` (44 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a32bb4a902153541e12f7aa349260344705e6812
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a32bb4a902153541e12f7aa349260344705e6812 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 23 Aug 2024 09:26:01 -0700
Subject: [PATCH] app/testpmd: remove unnecessary cast
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0a3901aa624a690faa49ca081c468320d4edcb7a ]
The list of builtin cmdline commands has unnecessary cast which
blocks compiler type checking.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 456 ++++++++++++++++++++---------------------
1 file changed, 228 insertions(+), 228 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d9304e4a32..bf6794ee1d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13047,240 +13047,240 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
/* list of instructions */
static cmdline_parse_ctx_t builtin_ctx[] = {
- (cmdline_parse_inst_t *)&cmd_help_brief,
- (cmdline_parse_inst_t *)&cmd_help_long,
- (cmdline_parse_inst_t *)&cmd_quit,
- (cmdline_parse_inst_t *)&cmd_load_from_file,
- (cmdline_parse_inst_t *)&cmd_showport,
- (cmdline_parse_inst_t *)&cmd_showqueue,
- (cmdline_parse_inst_t *)&cmd_showeeprom,
- (cmdline_parse_inst_t *)&cmd_showportall,
- (cmdline_parse_inst_t *)&cmd_representor_info,
- (cmdline_parse_inst_t *)&cmd_showdevice,
- (cmdline_parse_inst_t *)&cmd_showcfg,
- (cmdline_parse_inst_t *)&cmd_showfwdall,
- (cmdline_parse_inst_t *)&cmd_start,
- (cmdline_parse_inst_t *)&cmd_start_tx_first,
- (cmdline_parse_inst_t *)&cmd_start_tx_first_n,
- (cmdline_parse_inst_t *)&cmd_set_link_up,
- (cmdline_parse_inst_t *)&cmd_set_link_down,
- (cmdline_parse_inst_t *)&cmd_reset,
- (cmdline_parse_inst_t *)&cmd_set_numbers,
- (cmdline_parse_inst_t *)&cmd_set_log,
- (cmdline_parse_inst_t *)&cmd_set_rxoffs,
- (cmdline_parse_inst_t *)&cmd_set_rxpkts,
- (cmdline_parse_inst_t *)&cmd_set_rxhdrs,
- (cmdline_parse_inst_t *)&cmd_set_txpkts,
- (cmdline_parse_inst_t *)&cmd_set_txsplit,
- (cmdline_parse_inst_t *)&cmd_set_txtimes,
- (cmdline_parse_inst_t *)&cmd_set_fwd_list,
- (cmdline_parse_inst_t *)&cmd_set_fwd_mask,
- (cmdline_parse_inst_t *)&cmd_set_fwd_mode,
- (cmdline_parse_inst_t *)&cmd_set_fwd_retry_mode,
- (cmdline_parse_inst_t *)&cmd_set_burst_tx_retry,
- (cmdline_parse_inst_t *)&cmd_set_promisc_mode_one,
- (cmdline_parse_inst_t *)&cmd_set_promisc_mode_all,
- (cmdline_parse_inst_t *)&cmd_set_allmulti_mode_one,
- (cmdline_parse_inst_t *)&cmd_set_allmulti_mode_all,
- (cmdline_parse_inst_t *)&cmd_set_flush_rx,
- (cmdline_parse_inst_t *)&cmd_set_link_check,
- (cmdline_parse_inst_t *)&cmd_vlan_offload,
- (cmdline_parse_inst_t *)&cmd_vlan_tpid,
- (cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
- (cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
- (cmdline_parse_inst_t *)&cmd_csum_set,
- (cmdline_parse_inst_t *)&cmd_csum_show,
- (cmdline_parse_inst_t *)&cmd_csum_tunnel,
- (cmdline_parse_inst_t *)&cmd_csum_mac_swap,
- (cmdline_parse_inst_t *)&cmd_tso_set,
- (cmdline_parse_inst_t *)&cmd_tso_show,
- (cmdline_parse_inst_t *)&cmd_tunnel_tso_set,
- (cmdline_parse_inst_t *)&cmd_tunnel_tso_show,
+ &cmd_help_brief,
+ &cmd_help_long,
+ &cmd_quit,
+ &cmd_load_from_file,
+ &cmd_showport,
+ &cmd_showqueue,
+ &cmd_showeeprom,
+ &cmd_showportall,
+ &cmd_representor_info,
+ &cmd_showdevice,
+ &cmd_showcfg,
+ &cmd_showfwdall,
+ &cmd_start,
+ &cmd_start_tx_first,
+ &cmd_start_tx_first_n,
+ &cmd_set_link_up,
+ &cmd_set_link_down,
+ &cmd_reset,
+ &cmd_set_numbers,
+ &cmd_set_log,
+ &cmd_set_rxoffs,
+ &cmd_set_rxpkts,
+ &cmd_set_rxhdrs,
+ &cmd_set_txpkts,
+ &cmd_set_txsplit,
+ &cmd_set_txtimes,
+ &cmd_set_fwd_list,
+ &cmd_set_fwd_mask,
+ &cmd_set_fwd_mode,
+ &cmd_set_fwd_retry_mode,
+ &cmd_set_burst_tx_retry,
+ &cmd_set_promisc_mode_one,
+ &cmd_set_promisc_mode_all,
+ &cmd_set_allmulti_mode_one,
+ &cmd_set_allmulti_mode_all,
+ &cmd_set_flush_rx,
+ &cmd_set_link_check,
+ &cmd_vlan_offload,
+ &cmd_vlan_tpid,
+ &cmd_rx_vlan_filter_all,
+ &cmd_rx_vlan_filter,
+ &cmd_tx_vlan_set,
+ &cmd_tx_vlan_set_qinq,
+ &cmd_tx_vlan_reset,
+ &cmd_tx_vlan_set_pvid,
+ &cmd_csum_set,
+ &cmd_csum_show,
+ &cmd_csum_tunnel,
+ &cmd_csum_mac_swap,
+ &cmd_tso_set,
+ &cmd_tso_show,
+ &cmd_tunnel_tso_set,
+ &cmd_tunnel_tso_show,
#ifdef RTE_LIB_GRO
- (cmdline_parse_inst_t *)&cmd_gro_enable,
- (cmdline_parse_inst_t *)&cmd_gro_flush,
- (cmdline_parse_inst_t *)&cmd_gro_show,
+ &cmd_gro_enable,
+ &cmd_gro_flush,
+ &cmd_gro_show,
#endif
#ifdef RTE_LIB_GSO
- (cmdline_parse_inst_t *)&cmd_gso_enable,
- (cmdline_parse_inst_t *)&cmd_gso_size,
- (cmdline_parse_inst_t *)&cmd_gso_show,
+ &cmd_gso_enable,
+ &cmd_gso_size,
+ &cmd_gso_show,
#endif
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_rx,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_tx,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_hw,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_lw,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_pt,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_xon,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_macfwd,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_autoneg,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_show,
- (cmdline_parse_inst_t *)&cmd_priority_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_queue_priority_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_config_dcb,
- (cmdline_parse_inst_t *)&cmd_read_rxd_txd,
- (cmdline_parse_inst_t *)&cmd_stop,
- (cmdline_parse_inst_t *)&cmd_mac_addr,
- (cmdline_parse_inst_t *)&cmd_set_fwd_eth_peer,
- (cmdline_parse_inst_t *)&cmd_set_qmap,
- (cmdline_parse_inst_t *)&cmd_set_xstats_hide_zero,
- (cmdline_parse_inst_t *)&cmd_set_record_core_cycles,
- (cmdline_parse_inst_t *)&cmd_set_record_burst_stats,
- (cmdline_parse_inst_t *)&cmd_operate_port,
- (cmdline_parse_inst_t *)&cmd_operate_specific_port,
- (cmdline_parse_inst_t *)&cmd_operate_attach_port,
- (cmdline_parse_inst_t *)&cmd_operate_detach_port,
- (cmdline_parse_inst_t *)&cmd_operate_detach_device,
- (cmdline_parse_inst_t *)&cmd_set_port_setup_on,
- (cmdline_parse_inst_t *)&cmd_config_speed_all,
- (cmdline_parse_inst_t *)&cmd_config_speed_specific,
- (cmdline_parse_inst_t *)&cmd_config_loopback_all,
- (cmdline_parse_inst_t *)&cmd_config_loopback_specific,
- (cmdline_parse_inst_t *)&cmd_config_rx_tx,
- (cmdline_parse_inst_t *)&cmd_config_mtu,
- (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
- (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
- (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
- (cmdline_parse_inst_t *)&cmd_config_rss,
- (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
- (cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_config_deferred_start_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_config_rss_reta,
- (cmdline_parse_inst_t *)&cmd_showport_reta,
- (cmdline_parse_inst_t *)&cmd_showport_macs,
- (cmdline_parse_inst_t *)&cmd_show_port_flow_transfer_proxy,
- (cmdline_parse_inst_t *)&cmd_config_burst,
- (cmdline_parse_inst_t *)&cmd_config_thresh,
- (cmdline_parse_inst_t *)&cmd_config_threshold,
- (cmdline_parse_inst_t *)&cmd_set_uc_hash_filter,
- (cmdline_parse_inst_t *)&cmd_set_uc_all_hash_filter,
- (cmdline_parse_inst_t *)&cmd_vf_mac_addr_filter,
- (cmdline_parse_inst_t *)&cmd_queue_rate_limit,
- (cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash_key,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash_algo,
- (cmdline_parse_inst_t *)&cmd_config_rss_hash_key,
- (cmdline_parse_inst_t *)&cmd_cleanup_txq_mbufs,
- (cmdline_parse_inst_t *)&cmd_dump,
- (cmdline_parse_inst_t *)&cmd_dump_one,
- (cmdline_parse_inst_t *)&cmd_flow,
- (cmdline_parse_inst_t *)&cmd_show_port_meter_cap,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_srtcm,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_trtcm,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_trtcm_rfc4115,
- (cmdline_parse_inst_t *)&cmd_del_port_meter_profile,
- (cmdline_parse_inst_t *)&cmd_create_port_meter,
- (cmdline_parse_inst_t *)&cmd_enable_port_meter,
- (cmdline_parse_inst_t *)&cmd_disable_port_meter,
- (cmdline_parse_inst_t *)&cmd_del_port_meter,
- (cmdline_parse_inst_t *)&cmd_del_port_meter_policy,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_profile,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_dscp_table,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_vlan_table,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_in_proto,
- (cmdline_parse_inst_t *)&cmd_get_port_meter_in_proto,
- (cmdline_parse_inst_t *)&cmd_get_port_meter_in_proto_prio,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_stats_mask,
- (cmdline_parse_inst_t *)&cmd_show_port_meter_stats,
- (cmdline_parse_inst_t *)&cmd_mcast_addr,
- (cmdline_parse_inst_t *)&cmd_mcast_addr_flush,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_anti_spoof,
- (cmdline_parse_inst_t *)&cmd_set_vf_mac_anti_spoof,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_stripq,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_insert,
- (cmdline_parse_inst_t *)&cmd_set_tx_loopback,
- (cmdline_parse_inst_t *)&cmd_set_all_queues_drop_en,
- (cmdline_parse_inst_t *)&cmd_set_vf_traffic,
- (cmdline_parse_inst_t *)&cmd_set_vf_rxmode,
- (cmdline_parse_inst_t *)&cmd_vf_rate_limit,
- (cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
- (cmdline_parse_inst_t *)&cmd_set_vf_mac_addr,
- (cmdline_parse_inst_t *)&cmd_set_vxlan,
- (cmdline_parse_inst_t *)&cmd_set_vxlan_tos_ttl,
- (cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_nvgre,
- (cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_l2_encap,
- (cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_l2_decap,
- (cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_encap,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_decap,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
- (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
- (cmdline_parse_inst_t *)&cmd_show_vf_stats,
- (cmdline_parse_inst_t *)&cmd_clear_vf_stats,
- (cmdline_parse_inst_t *)&cmd_show_port_supported_ptypes,
- (cmdline_parse_inst_t *)&cmd_set_port_ptypes,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_level_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_type,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_stats,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_shared_shaper,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_shared_shaper,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_wred_profile,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_wred_profile,
- (cmdline_parse_inst_t *)&cmd_set_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node_pmode,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_leaf_node,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
- (cmdline_parse_inst_t *)&cmd_suspend_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_resume_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_ip_ecn,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_ip_dscp,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_vlan_dei,
- (cmdline_parse_inst_t *)&cmd_cfg_tunnel_udp_port,
- (cmdline_parse_inst_t *)&cmd_rx_offload_get_capa,
- (cmdline_parse_inst_t *)&cmd_rx_offload_get_configuration,
- (cmdline_parse_inst_t *)&cmd_config_per_port_rx_offload,
- (cmdline_parse_inst_t *)&cmd_config_all_port_rx_offload,
- (cmdline_parse_inst_t *)&cmd_config_per_queue_rx_offload,
- (cmdline_parse_inst_t *)&cmd_tx_offload_get_capa,
- (cmdline_parse_inst_t *)&cmd_tx_offload_get_configuration,
- (cmdline_parse_inst_t *)&cmd_config_per_port_tx_offload,
- (cmdline_parse_inst_t *)&cmd_config_all_port_tx_offload,
- (cmdline_parse_inst_t *)&cmd_config_per_queue_tx_offload,
+ &cmd_link_flow_control_set,
+ &cmd_link_flow_control_set_rx,
+ &cmd_link_flow_control_set_tx,
+ &cmd_link_flow_control_set_hw,
+ &cmd_link_flow_control_set_lw,
+ &cmd_link_flow_control_set_pt,
+ &cmd_link_flow_control_set_xon,
+ &cmd_link_flow_control_set_macfwd,
+ &cmd_link_flow_control_set_autoneg,
+ &cmd_link_flow_control_show,
+ &cmd_priority_flow_control_set,
+ &cmd_queue_priority_flow_control_set,
+ &cmd_config_dcb,
+ &cmd_read_rxd_txd,
+ &cmd_stop,
+ &cmd_mac_addr,
+ &cmd_set_fwd_eth_peer,
+ &cmd_set_qmap,
+ &cmd_set_xstats_hide_zero,
+ &cmd_set_record_core_cycles,
+ &cmd_set_record_burst_stats,
+ &cmd_operate_port,
+ &cmd_operate_specific_port,
+ &cmd_operate_attach_port,
+ &cmd_operate_detach_port,
+ &cmd_operate_detach_device,
+ &cmd_set_port_setup_on,
+ &cmd_config_speed_all,
+ &cmd_config_speed_specific,
+ &cmd_config_loopback_all,
+ &cmd_config_loopback_specific,
+ &cmd_config_rx_tx,
+ &cmd_config_mtu,
+ &cmd_config_max_pkt_len,
+ &cmd_config_max_lro_pkt_size,
+ &cmd_config_rx_mode_flag,
+ &cmd_config_rss,
+ &cmd_config_rxtx_ring_size,
+ &cmd_config_rxtx_queue,
+ &cmd_config_deferred_start_rxtx_queue,
+ &cmd_setup_rxtx_queue,
+ &cmd_config_rss_reta,
+ &cmd_showport_reta,
+ &cmd_showport_macs,
+ &cmd_show_port_flow_transfer_proxy,
+ &cmd_config_burst,
+ &cmd_config_thresh,
+ &cmd_config_threshold,
+ &cmd_set_uc_hash_filter,
+ &cmd_set_uc_all_hash_filter,
+ &cmd_vf_mac_addr_filter,
+ &cmd_queue_rate_limit,
+ &cmd_tunnel_udp_config,
+ &cmd_showport_rss_hash,
+ &cmd_showport_rss_hash_key,
+ &cmd_showport_rss_hash_algo,
+ &cmd_config_rss_hash_key,
+ &cmd_cleanup_txq_mbufs,
+ &cmd_dump,
+ &cmd_dump_one,
+ &cmd_flow,
+ &cmd_show_port_meter_cap,
+ &cmd_add_port_meter_profile_srtcm,
+ &cmd_add_port_meter_profile_trtcm,
+ &cmd_add_port_meter_profile_trtcm_rfc4115,
+ &cmd_del_port_meter_profile,
+ &cmd_create_port_meter,
+ &cmd_enable_port_meter,
+ &cmd_disable_port_meter,
+ &cmd_del_port_meter,
+ &cmd_del_port_meter_policy,
+ &cmd_set_port_meter_profile,
+ &cmd_set_port_meter_dscp_table,
+ &cmd_set_port_meter_vlan_table,
+ &cmd_set_port_meter_in_proto,
+ &cmd_get_port_meter_in_proto,
+ &cmd_get_port_meter_in_proto_prio,
+ &cmd_set_port_meter_stats_mask,
+ &cmd_show_port_meter_stats,
+ &cmd_mcast_addr,
+ &cmd_mcast_addr_flush,
+ &cmd_set_vf_vlan_anti_spoof,
+ &cmd_set_vf_mac_anti_spoof,
+ &cmd_set_vf_vlan_stripq,
+ &cmd_set_vf_vlan_insert,
+ &cmd_set_tx_loopback,
+ &cmd_set_all_queues_drop_en,
+ &cmd_set_vf_traffic,
+ &cmd_set_vf_rxmode,
+ &cmd_vf_rate_limit,
+ &cmd_vf_rxvlan_filter,
+ &cmd_set_vf_mac_addr,
+ &cmd_set_vxlan,
+ &cmd_set_vxlan_tos_ttl,
+ &cmd_set_vxlan_with_vlan,
+ &cmd_set_nvgre,
+ &cmd_set_nvgre_with_vlan,
+ &cmd_set_l2_encap,
+ &cmd_set_l2_encap_with_vlan,
+ &cmd_set_l2_decap,
+ &cmd_set_l2_decap_with_vlan,
+ &cmd_set_mplsogre_encap,
+ &cmd_set_mplsogre_encap_with_vlan,
+ &cmd_set_mplsogre_decap,
+ &cmd_set_mplsogre_decap_with_vlan,
+ &cmd_set_mplsoudp_encap,
+ &cmd_set_mplsoudp_encap_with_vlan,
+ &cmd_set_mplsoudp_decap,
+ &cmd_set_mplsoudp_decap_with_vlan,
+ &cmd_set_conntrack_common,
+ &cmd_set_conntrack_dir,
+ &cmd_show_vf_stats,
+ &cmd_clear_vf_stats,
+ &cmd_show_port_supported_ptypes,
+ &cmd_set_port_ptypes,
+ &cmd_show_port_tm_cap,
+ &cmd_show_port_tm_level_cap,
+ &cmd_show_port_tm_node_cap,
+ &cmd_show_port_tm_node_type,
+ &cmd_show_port_tm_node_stats,
+ &cmd_add_port_tm_node_shaper_profile,
+ &cmd_del_port_tm_node_shaper_profile,
+ &cmd_add_port_tm_node_shared_shaper,
+ &cmd_del_port_tm_node_shared_shaper,
+ &cmd_add_port_tm_node_wred_profile,
+ &cmd_del_port_tm_node_wred_profile,
+ &cmd_set_port_tm_node_shaper_profile,
+ &cmd_add_port_tm_nonleaf_node,
+ &cmd_add_port_tm_nonleaf_node_pmode,
+ &cmd_add_port_tm_leaf_node,
+ &cmd_del_port_tm_node,
+ &cmd_set_port_tm_node_parent,
+ &cmd_suspend_port_tm_node,
+ &cmd_resume_port_tm_node,
+ &cmd_port_tm_hierarchy_commit,
+ &cmd_port_tm_mark_ip_ecn,
+ &cmd_port_tm_mark_ip_dscp,
+ &cmd_port_tm_mark_vlan_dei,
+ &cmd_cfg_tunnel_udp_port,
+ &cmd_rx_offload_get_capa,
+ &cmd_rx_offload_get_configuration,
+ &cmd_config_per_port_rx_offload,
+ &cmd_config_all_port_rx_offload,
+ &cmd_config_per_queue_rx_offload,
+ &cmd_tx_offload_get_capa,
+ &cmd_tx_offload_get_configuration,
+ &cmd_config_per_port_tx_offload,
+ &cmd_config_all_port_tx_offload,
+ &cmd_config_per_queue_tx_offload,
#ifdef RTE_LIB_BPF
- (cmdline_parse_inst_t *)&cmd_operate_bpf_ld_parse,
- (cmdline_parse_inst_t *)&cmd_operate_bpf_unld_parse,
+ &cmd_operate_bpf_ld_parse,
+ &cmd_operate_bpf_unld_parse,
#endif
- (cmdline_parse_inst_t *)&cmd_config_tx_metadata_specific,
- (cmdline_parse_inst_t *)&cmd_show_tx_metadata,
- (cmdline_parse_inst_t *)&cmd_show_rx_tx_desc_status,
- (cmdline_parse_inst_t *)&cmd_show_rx_queue_desc_used_count,
- (cmdline_parse_inst_t *)&cmd_set_raw,
- (cmdline_parse_inst_t *)&cmd_show_set_raw,
- (cmdline_parse_inst_t *)&cmd_show_set_raw_all,
- (cmdline_parse_inst_t *)&cmd_config_tx_dynf_specific,
- (cmdline_parse_inst_t *)&cmd_show_fec_mode,
- (cmdline_parse_inst_t *)&cmd_set_fec_mode,
- (cmdline_parse_inst_t *)&cmd_set_rxq_avail_thresh,
- (cmdline_parse_inst_t *)&cmd_show_capability,
- (cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
- (cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
- (cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
- (cmdline_parse_inst_t *)&cmd_show_port_cman_config,
- (cmdline_parse_inst_t *)&cmd_set_port_cman_config,
- (cmdline_parse_inst_t *)&cmd_config_tx_affinity_map,
+ &cmd_config_tx_metadata_specific,
+ &cmd_show_tx_metadata,
+ &cmd_show_rx_tx_desc_status,
+ &cmd_show_rx_queue_desc_used_count,
+ &cmd_set_raw,
+ &cmd_show_set_raw,
+ &cmd_show_set_raw_all,
+ &cmd_config_tx_dynf_specific,
+ &cmd_show_fec_mode,
+ &cmd_set_fec_mode,
+ &cmd_set_rxq_avail_thresh,
+ &cmd_show_capability,
+ &cmd_set_flex_is_pattern,
+ &cmd_set_flex_spec_pattern,
+ &cmd_show_port_cman_capa,
+ &cmd_show_port_cman_config,
+ &cmd_set_port_cman_config,
+ &cmd_config_tx_affinity_map,
NULL,
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:08.700532467 +0800
+++ 0076-app-testpmd-remove-unnecessary-cast.patch 2024-11-11 14:23:05.192192838 +0800
@@ -1 +1 @@
-From 0a3901aa624a690faa49ca081c468320d4edcb7a Mon Sep 17 00:00:00 2001
+From a32bb4a902153541e12f7aa349260344705e6812 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0a3901aa624a690faa49ca081c468320d4edcb7a ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -15,2 +17,2 @@
- app/test-pmd/cmdline.c | 458 ++++++++++++++++++++---------------------
- 1 file changed, 229 insertions(+), 229 deletions(-)
+ app/test-pmd/cmdline.c | 456 ++++++++++++++++++++---------------------
+ 1 file changed, 228 insertions(+), 228 deletions(-)
@@ -19 +21 @@
-index b7759e38a8..358319c20a 100644
+index d9304e4a32..bf6794ee1d 100644
@@ -22 +24 @@
-@@ -13146,241 +13146,241 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
+@@ -13047,240 +13047,240 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
@@ -205 +206,0 @@
-- (cmdline_parse_inst_t *)&cmd_config_rss_hash_algo,
@@ -355 +355,0 @@
-+ &cmd_config_rss_hash_algo,
@@ -457 +457 @@
-- (cmdline_parse_inst_t *)&cmd_show_rx_tx_queue_desc_used_count,
+- (cmdline_parse_inst_t *)&cmd_show_rx_queue_desc_used_count,
@@ -475 +475 @@
-+ &cmd_show_rx_tx_queue_desc_used_count,
++ &cmd_show_rx_queue_desc_used_count,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/pcap: set live interface as non-blocking' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (75 preceding siblings ...)
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
` (43 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ofer Dagan, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=84123ec794ad3a8c2796fe344030ce8fb9c32e67
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 84123ec794ad3a8c2796fe344030ce8fb9c32e67 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Sat, 24 Aug 2024 11:07:10 -0700
Subject: [PATCH] net/pcap: set live interface as non-blocking
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 60dd5a70035f447104d457aa338557fb58d5cb06 ]
The DPDK PMDs are supposed to be non-blocking and poll for packets.
Configure PCAP to do this on a live interface.
Bugzilla ID: 1526
Fixes: 4c173302c307 ("pcap: add new driver")
Reported-by: Ofer Dagan <ofer.d@claroty.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/pcap/pcap_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 9626c343dc..1fb98e3d2b 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -522,6 +522,12 @@ open_iface_live(const char *iface, pcap_t **pcap) {
return -1;
}
+ if (pcap_setnonblock(*pcap, 1, errbuf)) {
+ PMD_LOG(ERR, "Couldn't set non-blocking on %s: %s", iface, errbuf);
+ pcap_close(*pcap);
+ return -1;
+ }
+
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.754897266 +0800
+++ 0077-net-pcap-set-live-interface-as-non-blocking.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 60dd5a70035f447104d457aa338557fb58d5cb06 Mon Sep 17 00:00:00 2001
+From 84123ec794ad3a8c2796fe344030ce8fb9c32e67 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 60dd5a70035f447104d457aa338557fb58d5cb06 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/mana: support rdma-core via pkg-config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (76 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
` (42 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Shreesh Adiga; +Cc: xuemingl, Long Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9477687ed2ef8e7c2588397a9542fc5ccfc1df40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9477687ed2ef8e7c2588397a9542fc5ccfc1df40 Mon Sep 17 00:00:00 2001
From: Shreesh Adiga <16567adigashreesh@gmail.com>
Date: Fri, 20 Sep 2024 16:41:16 +0530
Subject: [PATCH] net/mana: support rdma-core via pkg-config
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8d7596cad7abb413c25f6782fe62fd0d388b8b94 ]
Currently building with custom rdma-core installed in /opt/rdma-core
after setting PKG_CONFIG_PATH=/opt/rdma-core/lib64/pkgconfig/ results
in the below meson logs:
Run-time dependency libmana found: YES 1.0.54.0
Header "infiniband/manadv.h" has symbol "manadv_set_context_attr" : NO
To fix this, a libs list is collected in meson.build and passed to
the cc.has_header_symbol() call via dependencies. After this change,
the libmana header files are included and net/mana is
successfully enabled.
Fixes: 517ed6e2d590 ("net/mana: add basic driver with build environment")
Signed-off-by: Shreesh Adiga <16567adigashreesh@gmail.com>
Acked-by: Long Li <longli@microsoft.com>
---
drivers/net/mana/meson.build | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mana/meson.build b/drivers/net/mana/meson.build
index 2d72eca5a8..3ddc230ab4 100644
--- a/drivers/net/mana/meson.build
+++ b/drivers/net/mana/meson.build
@@ -19,12 +19,14 @@ sources += files(
)
libnames = ['ibverbs', 'mana']
+libs = []
foreach libname:libnames
lib = dependency('lib' + libname, required:false)
if not lib.found()
lib = cc.find_library(libname, required:false)
endif
if lib.found()
+ libs += lib
ext_deps += lib
else
build = false
@@ -43,7 +45,7 @@ required_symbols = [
]
foreach arg:required_symbols
- if not cc.has_header_symbol(arg[0], arg[1])
+ if not cc.has_header_symbol(arg[0], arg[1], dependencies: libs, args: cflags)
build = false
reason = 'missing symbol "' + arg[1] + '" in "' + arg[0] + '"'
subdir_done()
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.786021665 +0800
+++ 0078-net-mana-support-rdma-core-via-pkg-config.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 8d7596cad7abb413c25f6782fe62fd0d388b8b94 Mon Sep 17 00:00:00 2001
+From 9477687ed2ef8e7c2588397a9542fc5ccfc1df40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8d7596cad7abb413c25f6782fe62fd0d388b8b94 ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 330d30b2ff..4d163fc0f2 100644
+index 2d72eca5a8..3ddc230ab4 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/ena: revert redefining memcpy' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (77 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
` (41 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Wathsala Vithanage, Morten Brørup, Tyler Retzlaff,
Shai Brandes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ac6feb7694793f1d084c443a5788725e0bb9630e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ac6feb7694793f1d084c443a5788725e0bb9630e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Mon, 12 Aug 2024 08:34:17 -0700
Subject: [PATCH] net/ena: revert redefining memcpy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 966764d003554b38e892cf18df9e9af44483036d ]
Redefining memcpy as rte_memcpy has no performance gain on
current compilers, and introduced bugs like this one where
rte_memcpy() will be detected as referencing past the destination.
Bugzilla ID: 1510
Fixes: 142778b3702a ("net/ena: switch memcpy to optimized version")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Shai Brandes <shaibran@amazon.com>
---
drivers/net/ena/base/ena_plat_dpdk.h | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 665ac2f0cc..ba4a525898 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -26,7 +26,6 @@
#include <rte_spinlock.h>
#include <sys/time.h>
-#include <rte_memcpy.h>
typedef uint64_t u64;
typedef uint32_t u32;
@@ -70,14 +69,7 @@ typedef uint64_t dma_addr_t;
#define ENA_UDELAY(x) rte_delay_us_block(x)
#define ENA_TOUCH(x) ((void)(x))
-/* Redefine memcpy with caution: rte_memcpy can be simply aliased to memcpy, so
- * make the redefinition only if it's safe (and beneficial) to do so.
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || \
- defined(RTE_ARCH_ARM_NEON_MEMCPY)
-#undef memcpy
-#define memcpy rte_memcpy
-#endif
+
#define wmb rte_wmb
#define rmb rte_rmb
#define mb rte_mb
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.821629465 +0800
+++ 0079-net-ena-revert-redefining-memcpy.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 966764d003554b38e892cf18df9e9af44483036d Mon Sep 17 00:00:00 2001
+From ac6feb7694793f1d084c443a5788725e0bb9630e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 966764d003554b38e892cf18df9e9af44483036d ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -23,2 +25,2 @@
- drivers/net/ena/base/ena_plat_dpdk.h | 9 +--------
- 1 file changed, 1 insertion(+), 8 deletions(-)
+ drivers/net/ena/base/ena_plat_dpdk.h | 10 +---------
+ 1 file changed, 1 insertion(+), 9 deletions(-)
@@ -27 +29 @@
-index a41a4e4506..1121460470 100644
+index 665ac2f0cc..ba4a525898 100644
@@ -38 +40 @@
-@@ -68,13 +67,7 @@ typedef uint64_t dma_addr_t;
+@@ -70,14 +69,7 @@ typedef uint64_t dma_addr_t;
@@ -45 +47,2 @@
--#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || defined(RTE_ARCH_ARM_NEON_MEMCPY)
+-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || \
+- defined(RTE_ARCH_ARM_NEON_MEMCPY)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/hns3: remove some basic address dump' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (78 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
` (40 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Jie Hai; +Cc: xuemingl, Huisong Li, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 26 Sep 2024 20:42:44 +0800
Subject: [PATCH] net/hns3: remove some basic address dump
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c8b7bec0ef23f53303c9cf03cfea44f1eb208738 ]
For security reasons, some address registers are not suitable
to be exposed; remove them.
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_regs.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 955bc7e3af..8793f61153 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -17,13 +17,9 @@
static int hns3_get_dfx_reg_line(struct hns3_hw *hw, uint32_t *lines);
-static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_ADDR_L_REG,
- HNS3_CMDQ_TX_ADDR_H_REG,
- HNS3_CMDQ_TX_DEPTH_REG,
+static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_DEPTH_REG,
HNS3_CMDQ_TX_TAIL_REG,
HNS3_CMDQ_TX_HEAD_REG,
- HNS3_CMDQ_RX_ADDR_L_REG,
- HNS3_CMDQ_RX_ADDR_H_REG,
HNS3_CMDQ_RX_DEPTH_REG,
HNS3_CMDQ_RX_TAIL_REG,
HNS3_CMDQ_RX_HEAD_REG,
@@ -44,9 +40,7 @@ static const uint32_t common_vf_reg_addrs[] = {HNS3_MISC_VECTOR_REG_BASE,
HNS3_FUN_RST_ING,
HNS3_GRO_EN_REG};
-static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BASEADDR_L_REG,
- HNS3_RING_RX_BASEADDR_H_REG,
- HNS3_RING_RX_BD_NUM_REG,
+static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BD_NUM_REG,
HNS3_RING_RX_BD_LEN_REG,
HNS3_RING_RX_EN_REG,
HNS3_RING_RX_MERGE_EN_REG,
@@ -57,8 +51,6 @@ static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BASEADDR_L_REG,
HNS3_RING_RX_FBD_OFFSET_REG,
HNS3_RING_RX_STASH_REG,
HNS3_RING_RX_BD_ERR_REG,
- HNS3_RING_TX_BASEADDR_L_REG,
- HNS3_RING_TX_BASEADDR_H_REG,
HNS3_RING_TX_BD_NUM_REG,
HNS3_RING_TX_EN_REG,
HNS3_RING_TX_PRIORITY_REG,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.855073064 +0800
+++ 0080-net-hns3-remove-some-basic-address-dump.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From c8b7bec0ef23f53303c9cf03cfea44f1eb208738 Mon Sep 17 00:00:00 2001
+From c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c8b7bec0ef23f53303c9cf03cfea44f1eb208738 ]
@@ -8,2 +10,0 @@
-
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/hns3: fix dump counter of registers' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (79 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
` (39 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Jie Hai; +Cc: xuemingl, Huisong Li, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0feee3f5727cb2f849337b091252c6e67b100ba5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0feee3f5727cb2f849337b091252c6e67b100ba5 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 26 Sep 2024 20:42:45 +0800
Subject: [PATCH] net/hns3: fix dump counter of registers
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e9b82b4d54c019973ffcb5f404ba920494f70513 ]
Since the driver dumps the queue interrupt registers according
to intr_tqps_num, the counter should use the same value.
Fixes: acb3260fac5c ("net/hns3: fix dump register out of range")
Fixes: 936eda25e8da ("net/hns3: support dump register")
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_regs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 8793f61153..8c0c0a3027 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -127,7 +127,7 @@ hns3_get_regs_length(struct hns3_hw *hw, uint32_t *length)
tqp_intr_lines = sizeof(tqp_intr_reg_addrs) / REG_LEN_PER_LINE + 1;
len = (cmdq_lines + common_lines + ring_lines * hw->tqps_num +
- tqp_intr_lines * hw->num_msi) * REG_NUM_PER_LINE;
+ tqp_intr_lines * hw->intr_tqps_num) * REG_NUM_PER_LINE;
if (!hns->is_vf) {
ret = hns3_get_regs_num(hw, ®s_num_32_bit, ®s_num_64_bit);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.892376163 +0800
+++ 0081-net-hns3-fix-dump-counter-of-registers.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From e9b82b4d54c019973ffcb5f404ba920494f70513 Mon Sep 17 00:00:00 2001
+From 0feee3f5727cb2f849337b091252c6e67b100ba5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e9b82b4d54c019973ffcb5f404ba920494f70513 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
* patch 'ethdev: fix overflow in descriptor count' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Niall Meade; +Cc: xuemingl, Ferruh Yigit, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd Mon Sep 17 00:00:00 2001
From: Niall Meade <niall.meade@intel.com>
Date: Mon, 30 Sep 2024 13:40:02 +0000
Subject: [PATCH] ethdev: fix overflow in descriptor count
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 30efe60d3a37896567b660229ef6a04c5526f6db ]
Addressed a specific overflow issue in the eth_dev_adjust_nb_desc()
function where the uint16_t variable nb_desc would overflow when its
value was greater than (2^16 - nb_align). This overflow caused nb_desc
to incorrectly wrap around between 0 and nb_align-1, leading to the
function setting nb_desc to nb_min instead of the expected nb_max.
To give an example, let nb_desc=UINT16_MAX, nb_align=32, nb_max=4096 and
nb_min=64. RTE_ALIGN_CEIL(nb_desc, nb_align) calls
RTE_ALIGN_FLOOR(nb_desc + nb_align - 1, nb_align). This results in an
overflow of nb_desc, leading to nb_desc being set to 30 and then 0 when
the macros return. As a result of this, nb_desc is then set to nb_min
later on.
The resolution involves upcasting nb_desc to a uint32_t before the
RTE_ALIGN_CEIL macro is applied. This change ensures that the subsequent
call to RTE_ALIGN_FLOOR(nb_desc + (nb_align - 1), nb_align) does not
result in an overflow, as it would when nb_desc is a uint16_t. By using
a uint32_t for these operations, the correct behavior is maintained
without the risk of overflow.
Fixes: 0f67fc3baeb9 ("ethdev: add function to adjust number of descriptors")
Signed-off-by: Niall Meade <niall.meade@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
lib/ethdev/rte_ethdev.c | 12 +++++++++---
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/.mailmap b/.mailmap
index 5f2593e00e..674384cfc5 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1029,6 +1029,7 @@ Nelson Escobar <neescoba@cisco.com>
Nemanja Marjanovic <nemanja.marjanovic@intel.com>
Netanel Belgazal <netanel@amazon.com>
Netanel Gonen <netanelg@mellanox.com>
+Niall Meade <niall.meade@intel.com>
Niall Power <niall.power@intel.com>
Nicholas Pratte <npratte@iol.unh.edu>
Nick Connolly <nick.connolly@arm.com> <nick.connolly@mayadata.io>
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index b9d99ece15..1f067873a9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6556,13 +6556,19 @@ static void
eth_dev_adjust_nb_desc(uint16_t *nb_desc,
const struct rte_eth_desc_lim *desc_lim)
{
+ /* Upcast to uint32 to avoid potential overflow with RTE_ALIGN_CEIL(). */
+ uint32_t nb_desc_32 = (uint32_t)*nb_desc;
+
if (desc_lim->nb_align != 0)
- *nb_desc = RTE_ALIGN_CEIL(*nb_desc, desc_lim->nb_align);
+ nb_desc_32 = RTE_ALIGN_CEIL(nb_desc_32, desc_lim->nb_align);
if (desc_lim->nb_max != 0)
- *nb_desc = RTE_MIN(*nb_desc, desc_lim->nb_max);
+ nb_desc_32 = RTE_MIN(nb_desc_32, desc_lim->nb_max);
+
+ nb_desc_32 = RTE_MAX(nb_desc_32, desc_lim->nb_min);
- *nb_desc = RTE_MAX(*nb_desc, desc_lim->nb_min);
+ /* Assign clipped u32 back to u16. */
+ *nb_desc = (uint16_t)nb_desc_32;
}
int
--
2.34.1
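The wrap-around described in the commit message is easy to reproduce outside DPDK. Below is a minimal standalone sketch of the fixed logic; the macro and function names mirror the DPDK ones but are local re-implementations for illustration, not the real API:

```c
#include <assert.h>
#include <stdint.h>

/* Local re-implementations of RTE_ALIGN_FLOOR/RTE_ALIGN_CEIL semantics
 * for power-of-two alignments. */
#define ALIGN_FLOOR(v, a) ((v) & ~((uint32_t)(a) - 1))
#define ALIGN_CEIL(v, a)  ALIGN_FLOOR((v) + (a) - 1, (a))

/* Sketch of the fixed eth_dev_adjust_nb_desc(): the upcast to uint32_t
 * keeps nb_desc + nb_align - 1 from wrapping at 2^16. */
static uint16_t adjust_nb_desc(uint16_t nb_desc, uint16_t nb_align,
			       uint16_t nb_min, uint16_t nb_max)
{
	uint32_t v = nb_desc;		/* upcast before aligning */

	if (nb_align != 0)
		v = ALIGN_CEIL(v, nb_align);
	if (nb_max != 0 && v > nb_max)
		v = nb_max;
	if (v < nb_min)
		v = nb_min;
	return (uint16_t)v;		/* clipped to u16 range, safe to narrow */
}
```

With the commit-message example (nb_desc=65535, nb_align=32, nb_max=4096, nb_min=64), the upcasted version aligns 65535 up to 65536 and then clips to nb_max=4096, instead of wrapping to 0 and landing on nb_min.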
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.921019263 +0800
+++ 0082-ethdev-fix-overflow-in-descriptor-count.patch 2024-11-11 14:23:05.212192838 +0800
@@ -1 +1 @@
-From 30efe60d3a37896567b660229ef6a04c5526f6db Mon Sep 17 00:00:00 2001
+From c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 30efe60d3a37896567b660229ef6a04c5526f6db ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
@@ -37 +39 @@
-index d49838320e..6e72362ebc 100644
+index 5f2593e00e..674384cfc5 100644
@@ -40 +42 @@
-@@ -1070,6 +1070,7 @@ Nelson Escobar <neescoba@cisco.com>
+@@ -1029,6 +1029,7 @@ Nelson Escobar <neescoba@cisco.com>
@@ -49 +51 @@
-index a1f7efa913..84ee7588fc 100644
+index b9d99ece15..1f067873a9 100644
@@ -52 +54 @@
-@@ -6667,13 +6667,19 @@ static void
+@@ -6556,13 +6556,19 @@ static void
* patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Gagandeep Singh; +Cc: xuemingl, Hemant Agrawal, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=508ee4007a756e0eea944d532e3edb31135b31d9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 508ee4007a756e0eea944d532e3edb31135b31d9 Mon Sep 17 00:00:00 2001
From: Gagandeep Singh <g.singh@nxp.com>
Date: Tue, 1 Oct 2024 16:33:08 +0530
Subject: [PATCH] bus/dpaa: fix PFDRs leaks due to FQRNIs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b ]
When a Retire FQ command is executed on an FQ in the
Tentatively Scheduled or Parked state, the FQ is retired
immediately and an FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from MR and consume it to free
the memory it uses.
Although the RM does not specify which memory FQRNIs
use, experiments have shown that they can consume PFDRs.
So if these messages are allowed to build up indefinitely,
PFDR resources can become exhausted and cause enqueues
to stall. Therefore software must consume these MR
messages on a regular basis to avoid depleting
the available PFDR resources.
This is the PFDR leak issue which users can experience while
using the DPDK crypto driver and creating and destroying
sessions multiple times. On session destroy, DPDK calls
qman_retire_fq() for each FQ used by the session, but it does
not handle the generated FQRNIs, allowing them to build up
indefinitely in MR.
This patch fixes this issue by consuming the FQRNIs received
from MR immediately after FQ retire by calling drain_mr_fqrni().
Please note that drain_mr_fqrni() only looks for
FQRNI type messages to consume. If other types of messages,
such as FQRN, FQRL, FQPN and ERN, also arrive on MR, those
messages need to be handled separately.
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 83db0a534e..f06992ca48 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -294,10 +294,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -319,6 +341,7 @@ loop:
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -481,27 +504,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1825,6 +1827,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.34.1
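The qm_mr_pvb_update() helper that the patch moves (and now calls before each qm_mr_current()) is a valid-bit ring consumer: an entry is new when its valid bit matches the consumer's expectation, and the expectation flips on every wrap. A toy model of that pattern follows; RING_SIZE and VBIT are illustrative constants, not the real QM_MR_SIZE or QM_MR_VERB_VBIT values:

```c
#include <assert.h>
#include <stdint.h>

#define RING_SIZE 8		/* power of two, like QM_MR_SIZE */
#define VBIT      0x80		/* valid bit in each entry's verb byte */

struct mr_consumer {
	uint8_t pi;		/* next entry index to inspect */
	uint8_t vbit;		/* expected valid bit for new entries */
	uint8_t fill;		/* entries seen but not yet consumed */
};

/* One polling step: advance if the entry at pi carries the expected
 * valid bit; flip the expectation when the index wraps back to 0. */
static void pvb_update(struct mr_consumer *c, const uint8_t *verbs)
{
	if ((verbs[c->pi] & VBIT) == c->vbit) {
		c->pi = (c->pi + 1) & (RING_SIZE - 1);
		if (!c->pi)
			c->vbit ^= VBIT;
		c->fill++;
	}
}
```

After consuming a full ring pass, the consumer expects the complemented valid bit, so stale entries written during the previous pass are not re-read.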
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.964983062 +0800
+++ 0083-bus-dpaa-fix-PFDRs-leaks-due-to-FQRNIs.patch 2024-11-11 14:23:05.212192838 +0800
@@ -1 +1 @@
-From b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b Mon Sep 17 00:00:00 2001
+From 508ee4007a756e0eea944d532e3edb31135b31d9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b ]
@@ -37 +39,0 @@
-Cc: stable@dpdk.org
@@ -46 +48 @@
-index 301057723e..9c90ee25a6 100644
+index 83db0a534e..f06992ca48 100644
@@ -49 +51 @@
-@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+@@ -294,10 +294,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
@@ -82 +84 @@
-@@ -317,6 +339,7 @@ loop:
+@@ -319,6 +341,7 @@ loop:
@@ -90 +92 @@
-@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
+@@ -481,27 +504,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
@@ -118 +120 @@
-@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+@@ -1825,6 +1827,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
* patch 'net/dpaa: fix typecasting channel ID' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Rohit Raj; +Cc: xuemingl, Hemant Agrawal, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a1711950ed89b16332f4cc6e378de7269815ff6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a1711950ed89b16332f4cc6e378de7269815ff6 Mon Sep 17 00:00:00 2001
From: Rohit Raj <rohit.raj@nxp.com>
Date: Tue, 1 Oct 2024 16:33:09 +0530
Subject: [PATCH] net/dpaa: fix typecasting channel ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e ]
Avoid typecasting ch_id to u32 and passing it to another API, since it
can corrupt other data. Instead, create a new u32 variable and typecast
it back to u16 after the API updates it.
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bcb28f33ee..6fdbe80334 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -971,7 +971,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1095,7 +1095,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.34.1
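The hazard being fixed, handing a u16 field to an API that writes a full u32, can be shown with a tiny model. Here alloc_id() stands in for qman_alloc_pool_range(), and the struct and field names are illustrative, not the DPDK layout; the buggy 4-byte store is modelled with memcpy to keep the sketch well-defined:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Stand-in for an allocator that writes a full 32-bit result through
 * its out pointer, as qman_alloc_pool_range() does. */
static void alloc_id(uint32_t *out)
{
	*out = 7;
}

struct rxq_model {
	uint16_t ch_id;
	uint16_t neighbour;	/* field sitting right after ch_id */
};

/* Buggy pattern: a 4-byte store aimed at a 2-byte field clobbers
 * whatever follows it in memory. */
static void assign_cast(struct rxq_model *q)
{
	uint32_t id;

	alloc_id(&id);
	memcpy(&q->ch_id, &id, sizeof(id));	/* overruns ch_id */
}

/* Fixed pattern: write into a real u32, then narrow explicitly. */
static void assign_narrow(struct rxq_model *q)
{
	uint32_t id;

	alloc_id(&id);
	q->ch_id = (uint16_t)id;
}
```

Regardless of endianness, the cast-style store changes the neighbouring field, while the narrow-after-update version leaves it intact.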
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.999944261 +0800
+++ 0084-net-dpaa-fix-typecasting-channel-ID.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e Mon Sep 17 00:00:00 2001
+From 9a1711950ed89b16332f4cc6e378de7269815ff6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 51f5422e0c..afeca4307e 100644
+index bcb28f33ee..6fdbe80334 100644
@@ -23 +25 @@
-@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+@@ -971,7 +971,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
@@ -32 +34 @@
-@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+@@ -1095,7 +1095,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
* patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85 Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Tue, 1 Oct 2024 16:33:10 +0530
Subject: [PATCH] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 25434831ca958583fb79e1e8b06e83274c68fc93 ]
There is no need to classify interfaces separately for 1G and 10G.
Note that VSPs (Virtual Storage Profiles) are the DPAA equivalent of
SR-IOV configuration, logically dividing a physical port into virtual
ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 1814372a40..8263d42bed 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.033905660 +0800
+++ 0085-bus-dpaa-fix-VSP-for-1G-fm1-mac9-and-10.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From 25434831ca958583fb79e1e8b06e83274c68fc93 Mon Sep 17 00:00:00 2001
+From 747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 25434831ca958583fb79e1e8b06e83274c68fc93 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 41195eb0a7..beeb03dbf2 100644
+index 1814372a40..8263d42bed 100644
* patch 'bus/dpaa: fix the fman details status' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=adfe2ce7037c09e6a0b37d1d1381775e6b619f96
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From adfe2ce7037c09e6a0b37d1d1381775e6b619f96 Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Tue, 1 Oct 2024 16:33:11 +0530
Subject: [PATCH] bus/dpaa: fix the fman details status
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a87a1d0f4e7667fa3d6b818f30aa5c062e567597 ]
Fix the incorrect placement of brackets when calculating stats.
This corrects "(a | b) << 32" to "a | (b << 32)".
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.34.1
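The one-character difference this fix addresses is pure operator precedence: in C, `<<` binds tighter than `|`, so `a | b << 32` already means `a | (b << 32)`, whereas the old bracketing shifted the OR of both halves and lost the low word. A standalone sketch (lo/hi are simply the two 32-bit counter words; no register layout is modelled):

```c
#include <assert.h>
#include <stdint.h>

/* Fixed combination: low word in bits 0..31, high word in bits 32..63. */
static uint64_t combine_fixed(uint32_t lo, uint32_t hi)
{
	uint64_t a = lo;
	uint64_t b = hi;

	return a | b << 32;	/* '<<' binds tighter: a | (b << 32) */
}

/* Old, buggy bracketing: ORs both words first, then shifts the result,
 * pushing the low word out of its place entirely. */
static uint64_t combine_buggy(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)lo | (uint64_t)hi) << 32;
}
```

The buggy form zeroes the low 32 bits of every counter and mixes the low word into the high half.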
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.065665660 +0800
+++ 0086-bus-dpaa-fix-the-fman-details-status.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From a87a1d0f4e7667fa3d6b818f30aa5c062e567597 Mon Sep 17 00:00:00 2001
+From adfe2ce7037c09e6a0b37d1d1381775e6b619f96 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a87a1d0f4e7667fa3d6b818f30aa5c062e567597 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
* patch 'net/dpaa: fix reallocate mbuf handling' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Vanshika Shukla; +Cc: xuemingl, Hemant Agrawal, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b430ea372d74ca994443f2c8086311d1e3b1fcc8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b430ea372d74ca994443f2c8086311d1e3b1fcc8 Mon Sep 17 00:00:00 2001
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Date: Tue, 1 Oct 2024 16:33:25 +0530
Subject: [PATCH] net/dpaa: fix reallocate mbuf handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7594cafa92189fd5bad87a5caa6b7a92bbab0979 ]
This patch fixes a bug in the reallocate_mbuf() handling code.
The source location is corrected when copying data into the
new mbuf.
Fixes: f8c7a17a48c9 ("net/dpaa: support Tx scatter gather for non-DPAA buffer")
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index ce4f3d6c85..018d55bbdc 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1034,7 +1034,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
/* Copy the data */
data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
- rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(mbuf,
+ rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(temp_mbuf,
void *, offset1), bytes_to_copy);
/* Set new offsets and the temp buffers */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.097428559 +0800
+++ 0087-net-dpaa-fix-reallocate-mbuf-handling.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 7594cafa92189fd5bad87a5caa6b7a92bbab0979 Mon Sep 17 00:00:00 2001
+From b430ea372d74ca994443f2c8086311d1e3b1fcc8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7594cafa92189fd5bad87a5caa6b7a92bbab0979 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 1d7efdef88..247e7b92ba 100644
+index ce4f3d6c85..018d55bbdc 100644
@@ -23 +25 @@
-@@ -1223,7 +1223,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
+@@ -1034,7 +1034,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
* patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Rushil Gupta, Praveen Kaligineedi, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4c28c4f7677e0841e37ab4cf4e6db264e41ad8df
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4c28c4f7677e0841e37ab4cf4e6db264e41ad8df Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Tue, 1 Oct 2024 16:48:52 -0700
Subject: [PATCH] net/gve: fix mbuf allocation memory leak for DQ Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 265daac8a53aaaad89f562c201bc6c269d7817fc ]
Currently, gve_rxq_mbufs_alloc_dqo() allocates RING_SIZE buffers, but
only posts RING_SIZE - 1 of them, inevitably leaking a buffer every
time queues are stopped/started. This could eventually lead to running
out of mbufs if an application stops/starts traffic enough.
Fixes: b044845bb015 ("net/gve: support queue start/stop")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
.mailmap | 1 +
drivers/net/gve/gve_rx_dqo.c | 16 +++++++++-------
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/.mailmap b/.mailmap
index 674384cfc5..c26a1acf7a 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1141,6 +1141,7 @@ Pradeep Satyanarayana <pradeep@us.ibm.com>
Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Prashant Upadhyaya <prashant.upadhyaya@aricent.com> <praupadhyaya@gmail.com>
Prateek Agarwal <prateekag@cse.iitb.ac.in>
+Praveen Kaligineedi <pkaligineedi@google.com>
Praveen Shetty <praveen.shetty@intel.com>
Pravin Pathak <pravin.pathak@intel.com>
Prince Takkar <ptakkar@marvell.com>
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index a56cdbf11b..855c06dc11 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -335,34 +335,36 @@ static int
gve_rxq_mbufs_alloc_dqo(struct gve_rx_queue *rxq)
{
struct rte_mbuf *nmb;
+ uint16_t rx_mask;
uint16_t i;
int diag;
- diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc);
+ rx_mask = rxq->nb_rx_desc - 1;
+ diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0],
+ rx_mask);
if (diag < 0) {
rxq->stats.no_mbufs_bulk++;
- for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+ for (i = 0; i < rx_mask; i++) {
nmb = rte_pktmbuf_alloc(rxq->mpool);
if (!nmb)
break;
rxq->sw_ring[i] = nmb;
}
if (i < rxq->nb_rx_desc - 1) {
- rxq->stats.no_mbufs += rxq->nb_rx_desc - 1 - i;
+ rxq->stats.no_mbufs += rx_mask - i;
return -ENOMEM;
}
}
- for (i = 0; i < rxq->nb_rx_desc; i++) {
- if (i == rxq->nb_rx_desc - 1)
- break;
+ for (i = 0; i < rx_mask; i++) {
nmb = rxq->sw_ring[i];
rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i);
}
+ rxq->rx_ring[rx_mask].buf_id = rte_cpu_to_le_16(rx_mask);
rxq->nb_rx_hold = 0;
- rxq->bufq_tail = rxq->nb_rx_desc - 1;
+ rxq->bufq_tail = rx_mask;
rte_write32(rxq->bufq_tail, rxq->qrx_tail);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.130822358 +0800
+++ 0088-net-gve-fix-mbuf-allocation-memory-leak-for-DQ-Rx.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 265daac8a53aaaad89f562c201bc6c269d7817fc Mon Sep 17 00:00:00 2001
+From 4c28c4f7677e0841e37ab4cf4e6db264e41ad8df Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 265daac8a53aaaad89f562c201bc6c269d7817fc ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 6e72362ebc..7b3a20af68 100644
+index 674384cfc5..c26a1acf7a 100644
@@ -26 +28,2 @@
-@@ -1193,6 +1193,7 @@ Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
+@@ -1141,6 +1141,7 @@ Pradeep Satyanarayana <pradeep@us.ibm.com>
+ Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
@@ -29 +31,0 @@
- Prathisna Padmasanan <prathisna.padmasanan@intel.com>
@@ -35 +37 @@
-index d8e9eee4a8..81a68f0c7e 100644
+index a56cdbf11b..855c06dc11 100644
@@ -38 +40 @@
-@@ -395,34 +395,36 @@ static int
+@@ -335,34 +335,36 @@ static int
* patch 'net/gve: always attempt Rx refill on DQ' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Praveen Kaligineedi, Rushil Gupta, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=efbc64f353914ae5763f5b201e1d597d6c7b5044
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From efbc64f353914ae5763f5b201e1d597d6c7b5044 Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Tue, 1 Oct 2024 16:45:33 -0700
Subject: [PATCH] net/gve: always attempt Rx refill on DQ
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 ]
Before this patch, gve_rx_refill_dqo() is only called if the number of
packets received in a cycle is non-zero. However, in a
memory-constrained scenario, this doesn't behave well, as it could be
a potential source of lockup if there is no memory and all buffers have
been received before memory is freed up for the driver to use.
This patch moves the gve_rx_refill_dqo() call to occur regardless of
whether packets have been received so that in the case that enough
memory is freed, the driver can recover.
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 855c06dc11..0203d23b9a 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -135,14 +135,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
- if (rx_id_bufq != rxq->next_avail)
- rxq->next_avail = rx_id_bufq;
-
- gve_rx_refill_dqo(rxq);
+ rxq->next_avail = rx_id_bufq;
rxq->stats.packets += nb_rx;
rxq->stats.bytes += bytes;
}
+ gve_rx_refill_dqo(rxq);
return nb_rx;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.176827557 +0800
+++ 0089-net-gve-always-attempt-Rx-refill-on-DQ.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 Mon Sep 17 00:00:00 2001
+From efbc64f353914ae5763f5b201e1d597d6c7b5044 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 81a68f0c7e..e4084bc0dd 100644
+index 855c06dc11..0203d23b9a 100644
@@ -30 +32 @@
-@@ -195,14 +195,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+@@ -135,14 +135,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
* patch 'net/nfp: fix type declaration of some variables' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Qin Ke; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=33d19bdd507223b7e7f2b30332642338e3f0f058
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 33d19bdd507223b7e7f2b30332642338e3f0f058 Mon Sep 17 00:00:00 2001
From: Qin Ke <qin.ke@corigine.com>
Date: Thu, 5 Sep 2024 14:25:04 +0800
Subject: [PATCH] net/nfp: fix type declaration of some variables
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 93ebb1e57e3ff5ce34168058d73a82ae206255cc ]
The type declarations of the variables 'speed' and 'i' in
'nfp_net_link_speed_rte2nfp()' are not correct; fix them.
Fixes: 36a9abd4b679 ("net/nfp: write link speed to control BAR")
Signed-off-by: Qin Ke <qin.ke@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/nfp_net_common.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 0491912bd3..08cb5f7d3b 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -153,10 +153,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
[NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
-static uint16_t
-nfp_net_link_speed_rte2nfp(uint16_t speed)
+static size_t
+nfp_net_link_speed_rte2nfp(uint32_t speed)
{
- uint16_t i;
+ size_t i;
for (i = 0; i < RTE_DIM(nfp_net_link_speed_nfp2rte); i++) {
if (speed == nfp_net_link_speed_nfp2rte[i])
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.219265057 +0800
+++ 0090-net-nfp-fix-type-declaration-of-some-variables.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 93ebb1e57e3ff5ce34168058d73a82ae206255cc Mon Sep 17 00:00:00 2001
+From 33d19bdd507223b7e7f2b30332642338e3f0f058 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 93ebb1e57e3ff5ce34168058d73a82ae206255cc ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index d93f8157b9..780e9829d0 100644
+index 0491912bd3..08cb5f7d3b 100644
@@ -24 +26 @@
-@@ -157,10 +157,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
+@@ -153,10 +153,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
* patch 'net/nfp: fix representor port link status update' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Qin Ke; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d2484a7f388c787198637ea4e39a2722ab231f15
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d2484a7f388c787198637ea4e39a2722ab231f15 Mon Sep 17 00:00:00 2001
From: Qin Ke <qin.ke@corigine.com>
Date: Thu, 5 Sep 2024 14:25:11 +0800
Subject: [PATCH] net/nfp: fix representor port link status update
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d95cf21d2ed6630d21b5b1ca4abc40155720cd3f ]
The link status of the representor port is reported by the flower
firmware through a control message, and it is already parsed and
stored in the 'link' field of the representor port structure.
The original logic wrongly read the link status from the control BAR
again and used that value in the subsequent logic rather than the
'link' field of the representor port structure.
Fix this by deleting the control BAR read and using the
correct link status value.
Fixes: c4de52eca76c ("net/nfp: remove redundancy for representor port")
Signed-off-by: Qin Ke <qin.ke@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_representor.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 88fb6975af..23709acbba 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -23,7 +23,6 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete)
{
int ret;
- uint32_t nn_link_status;
struct nfp_net_hw *pf_hw;
struct rte_eth_link *link;
struct nfp_flower_representor *repr;
@@ -32,9 +31,7 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
link = &repr->link;
pf_hw = repr->app_fw_flower->pf_hw;
- nn_link_status = nn_cfg_readw(&pf_hw->super, NFP_NET_CFG_STS);
-
- ret = nfp_net_link_update_common(dev, pf_hw, link, nn_link_status);
+ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
return ret;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.255512556 +0800
+++ 0091-net-nfp-fix-representor-port-link-status-update.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From d95cf21d2ed6630d21b5b1ca4abc40155720cd3f Mon Sep 17 00:00:00 2001
+From d2484a7f388c787198637ea4e39a2722ab231f15 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d95cf21d2ed6630d21b5b1ca4abc40155720cd3f ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -25,2 +27,2 @@
- drivers/net/nfp/flower/nfp_flower_representor.c | 7 +------
- 1 file changed, 1 insertion(+), 6 deletions(-)
+ drivers/net/nfp/flower/nfp_flower_representor.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
@@ -29 +31 @@
-index 054ea1a938..5db7d50618 100644
+index 88fb6975af..23709acbba 100644
@@ -32 +34 @@
-@@ -29,18 +29,13 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
+@@ -23,7 +23,6 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
@@ -37 +39 @@
-- struct nfp_net_hw *pf_hw;
+ struct nfp_net_hw *pf_hw;
@@ -40,2 +42 @@
-
- repr = dev->data->dev_private;
+@@ -32,9 +31,7 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
@@ -44 +45 @@
-- pf_hw = repr->app_fw_flower->pf_hw;
+ pf_hw = repr->app_fw_flower->pf_hw;
@@ -47,2 +48,2 @@
-- ret = nfp_net_link_update_common(dev, link, nn_link_status);
-+ ret = nfp_net_link_update_common(dev, link, link->link_status);
+- ret = nfp_net_link_update_common(dev, pf_hw, link, nn_link_status);
++ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
* patch 'net/gve: fix refill logic causing memory corruption' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Rushil Gupta, Praveen Kaligineedi, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7907e4749624ac43a40a71bc200faa46d2e219dc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7907e4749624ac43a40a71bc200faa46d2e219dc Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 3 Oct 2024 18:05:18 -0700
Subject: [PATCH] net/gve: fix refill logic causing memory corruption
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 52c9b4069b216495d6e709bb500b6a52b8b2ca82 ]
There is a seemingly mundane error in the RX refill path which can lead
to major issues and ultimately program crashing.
This error occurs in an edge case where refilling the exact number of
buffers causes the ring to wrap around to 0. The current
refill logic is split into two conditions: first, when the number of
buffers to refill is greater than the number of buffers left in the ring
before wraparound occurs; second, when the opposite is true, and there
are enough buffers before wraparound to refill all buffers.
In this edge case, the first condition erroneously uses a (<) condition
to decide whether to wrap around, when it should have been (<=). In that
case, the second condition would run and the tail pointer would be set
to an invalid value (RING_SIZE). This causes a number of cascading
failures.
1. The first issue is rather mundane in that rxq->bufq_tail == RING_SIZE at
the end of the refill; this will correct itself on the next refill
without any sort of memory leak or corruption;
2. The second failure is that the head pointer would end up overrunning
the tail because the last buffer that is refilled is refilled at
sw_ring[RING_SIZE] instead of sw_ring[0]. This would cause the driver
to give the application a stale mbuf, one that has been potentially
freed or is otherwise stale;
3. The third failure comes from the fact that the software ring is being
overrun. Because we directly use the sw_ring pointer to refill
buffers, when sw_ring[RING_SIZE] is filled, a buffer overflow occurs.
The overwritten data has the potential to be important data, and this
can potentially cause the program to crash outright.
This patch fixes the refill bug while greatly simplifying the logic so
that it is much less error-prone.
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 62 ++++++++++--------------------------
1 file changed, 16 insertions(+), 46 deletions(-)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 0203d23b9a..f55a03f8c4 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -10,66 +10,36 @@
static inline void
gve_rx_refill_dqo(struct gve_rx_queue *rxq)
{
- volatile struct gve_rx_desc_dqo *rx_buf_ring;
volatile struct gve_rx_desc_dqo *rx_buf_desc;
struct rte_mbuf *nmb[rxq->nb_rx_hold];
uint16_t nb_refill = rxq->nb_rx_hold;
- uint16_t nb_desc = rxq->nb_rx_desc;
uint16_t next_avail = rxq->bufq_tail;
struct rte_eth_dev *dev;
uint64_t dma_addr;
- uint16_t delta;
int i;
if (rxq->nb_rx_hold < rxq->free_thresh)
return;
- rx_buf_ring = rxq->rx_ring;
- delta = nb_desc - next_avail;
- if (unlikely(delta < nb_refill)) {
- if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) {
- for (i = 0; i < delta; i++) {
- rx_buf_desc = &rx_buf_ring[next_avail + i];
- rxq->sw_ring[next_avail + i] = nmb[i];
- dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
- rx_buf_desc->header_buf_addr = 0;
- rx_buf_desc->buf_addr = dma_addr;
- }
- nb_refill -= delta;
- next_avail = 0;
- rxq->nb_rx_hold -= delta;
- } else {
- rxq->stats.no_mbufs_bulk++;
- rxq->stats.no_mbufs += nb_desc - next_avail;
- dev = &rte_eth_devices[rxq->port_id];
- dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
- PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
- rxq->port_id, rxq->queue_id);
- return;
- }
+ if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill))) {
+ rxq->stats.no_mbufs_bulk++;
+ rxq->stats.no_mbufs += nb_refill;
+ dev = &rte_eth_devices[rxq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_refill;
+ PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rxq->port_id, rxq->queue_id);
+ return;
}
- if (nb_desc - next_avail >= nb_refill) {
- if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) {
- for (i = 0; i < nb_refill; i++) {
- rx_buf_desc = &rx_buf_ring[next_avail + i];
- rxq->sw_ring[next_avail + i] = nmb[i];
- dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
- rx_buf_desc->header_buf_addr = 0;
- rx_buf_desc->buf_addr = dma_addr;
- }
- next_avail += nb_refill;
- rxq->nb_rx_hold -= nb_refill;
- } else {
- rxq->stats.no_mbufs_bulk++;
- rxq->stats.no_mbufs += nb_desc - next_avail;
- dev = &rte_eth_devices[rxq->port_id];
- dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
- PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
- rxq->port_id, rxq->queue_id);
- }
+ for (i = 0; i < nb_refill; i++) {
+ rx_buf_desc = &rxq->rx_ring[next_avail];
+ rxq->sw_ring[next_avail] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->header_buf_addr = 0;
+ rx_buf_desc->buf_addr = dma_addr;
+ next_avail = (next_avail + 1) & (rxq->nb_rx_desc - 1);
}
-
+ rxq->nb_rx_hold -= nb_refill;
rte_write32(next_avail, rxq->qrx_tail);
rxq->bufq_tail = next_avail;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.294538355 +0800
+++ 0092-net-gve-fix-refill-logic-causing-memory-corruption.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 52c9b4069b216495d6e709bb500b6a52b8b2ca82 Mon Sep 17 00:00:00 2001
+From 7907e4749624ac43a40a71bc200faa46d2e219dc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 52c9b4069b216495d6e709bb500b6a52b8b2ca82 ]
@@ -40 +42,0 @@
-Cc: stable@dpdk.org
@@ -50 +52 @@
-index e4084bc0dd..5371bab77d 100644
+index 0203d23b9a..f55a03f8c4 100644
@@ -53 +55 @@
-@@ -11,66 +11,36 @@
+@@ -10,66 +10,36 @@
* patch 'net/gve: add IO memory barriers before reading descriptors' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Praveen Kaligineedi, Rushil Gupta, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1c6a6173878598d07e8fbfef9fe64114a89cb003
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 3 Oct 2024 18:05:35 -0700
Subject: [PATCH] net/gve: add IO memory barriers before reading descriptors
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]
Without memory barriers, there is no guarantee that the CPU will
actually wait until after the descriptor has been fully written before
loading descriptor data. In this case, it is possible that stale data is
read and acted on by the driver when processing TX or RX completions.
This change adds read memory barriers just after the generation bit is
read in both the RX and the TX path to ensure that the NIC has properly
passed ownership to the driver before descriptor data is read in full.
Note that memory barriers should not be needed after writing the RX
buffer queue/TX descriptor queue tails because rte_write32 includes an
implicit write memory barrier.
Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 2 ++
drivers/net/gve/gve_tx_dqo.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index f55a03f8c4..3f694a4d9a 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rx_desc->generation != rxq->cur_gen_bit)
break;
+ rte_io_rmb();
+
if (unlikely(rx_desc->rx_error)) {
rxq->stats.errors++;
continue;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index b9d6d01749..ce3681b6c6 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -24,6 +24,8 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
if (compl_desc->generation != txq->cur_gen_bit)
return;
+ rte_io_rmb();
+
compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);
aim_txq = txq->txqs[compl_desc->id];
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.326330154 +0800
+++ 0093-net-gve-add-IO-memory-barriers-before-reading-descri.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From f8fee84eb48cdf13a7a29f5851a2e2a41045813a Mon Sep 17 00:00:00 2001
+From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -32 +34 @@
-index 5371bab77d..285c6ddd61 100644
+index f55a03f8c4..3f694a4d9a 100644
@@ -35 +37 @@
-@@ -132,6 +132,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
@@ -45 +47 @@
-index 731c287224..6984f92443 100644
+index b9d6d01749..ce3681b6c6 100644
* patch 'net/memif: fix buffer overflow in zero copy Rx' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Mihai Brodschi; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3061d87b232c715422e2fe93017afc85f528fc40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3061d87b232c715422e2fe93017afc85f528fc40 Mon Sep 17 00:00:00 2001
From: Mihai Brodschi <mihai.brodschi@broadcom.com>
Date: Sat, 29 Jun 2024 00:01:29 +0300
Subject: [PATCH] net/memif: fix buffer overflow in zero copy Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 ]
rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
new mbufs to be provided to the sender. The allocated mbuf pointers
are stored in a ring, but the alloc function doesn't implement index
wrap-around, so it writes past the end of the array. This results in
memory corruption and duplicate mbufs being received.
Allocate 2x the space for the mbuf ring, so that the alloc function
has a contiguous array to write to, then copy the excess entries
to the start of the array.
Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index c26a1acf7a..8d7fa55d9e 100644
--- a/.mailmap
+++ b/.mailmap
@@ -971,6 +971,7 @@ Michal Swiatkowski <michal.swiatkowski@intel.com>
Michal Wilczynski <michal.wilczynski@intel.com>
Michel Machado <michel@digirati.com.br>
Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
+Mihai Brodschi <mihai.brodschi@broadcom.com>
Mihai Pogonaru <pogonarumihai@gmail.com>
Mike Baucom <michael.baucom@broadcom.com>
Mike Pattrick <mkp@redhat.com>
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index f05f4c24df..1eb41bb471 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -600,6 +600,10 @@ refill:
ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
if (unlikely(ret < 0))
goto no_free_mbufs;
+ if (unlikely(n_slots > ring_size - (head & mask))) {
+ rte_memcpy(mq->buffers, &mq->buffers[ring_size],
+ (n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
+ }
while (n_slots--) {
s0 = head++ & mask;
@@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
}
mq->buffers = NULL;
if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
+ /*
+ * Allocate 2x ring_size to reserve a contiguous array for
+ * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
+ */
mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
- (1 << mq->log2_ring_size), 0);
+ (1 << (mq->log2_ring_size + 1)), 0);
if (mq->buffers == NULL)
return -ENOMEM;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.360403854 +0800
+++ 0094-net-memif-fix-buffer-overflow-in-zero-copy-Rx.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 Mon Sep 17 00:00:00 2001
+From 3061d87b232c715422e2fe93017afc85f528fc40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 7b3a20af68..2e909c48a8 100644
+index c26a1acf7a..8d7fa55d9e 100644
@@ -30 +32,2 @@
-@@ -1011,6 +1011,7 @@ Michal Wilczynski <michal.wilczynski@intel.com>
+@@ -971,6 +971,7 @@ Michal Swiatkowski <michal.swiatkowski@intel.com>
+ Michal Wilczynski <michal.wilczynski@intel.com>
@@ -32 +34,0 @@
- Midde Ajijur Rehaman <ajijurx.rehaman.midde@intel.com>
@@ -39 +41 @@
-index e220ffaf92..cd722f254f 100644
+index f05f4c24df..1eb41bb471 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/tap: restrict maximum number of MP FDs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (93 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
` (25 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 11 Oct 2024 10:29:23 -0700
Subject: [PATCH] net/tap: restrict maximum number of MP FDs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 288649a11a8a332727f2a988c676ff7dfd1bc4c5 ]
Now that the maximum number of MP FDs has increased to 253, it is
possible that the number of queues the TAP device can handle is less
than that. Therefore the code handling MP messages should only allow
the number of queues the device can handle.
Coverity issue: 445386
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/tap/rte_eth_tap.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 3fa03cdbee..93bba3cec1 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -2393,9 +2393,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
/* Fill file descriptors for all queues */
reply.num_fds = 0;
reply_param->rxq_count = 0;
- if (dev->data->nb_rx_queues + dev->data->nb_tx_queues >
- RTE_MP_MAX_FD_NUM){
- TAP_LOG(ERR, "Number of rx/tx queues exceeds max number of fds");
+
+ if (dev->data->nb_rx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+ TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
+ dev->data->nb_rx_queues, RTE_PMD_TAP_MAX_QUEUES);
return -1;
}
--
2.34.1
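The corrected check validates the queue count against the driver's own limit rather than the generic MP FD limit. A minimal sketch of that pattern (the constant name is hypothetical, standing in for RTE_PMD_TAP_MAX_QUEUES):

```c
#include <assert.h>
#include <stdio.h>

#define MAX_QUEUES 8	/* hypothetical driver queue limit */

/* Return 0 when the queue count can be served, -1 otherwise,
 * mirroring the corrected check in tap_mp_sync_queues(). */
static int check_queue_count(unsigned int nb_rx_queues)
{
	if (nb_rx_queues > MAX_QUEUES) {
		fprintf(stderr,
			"Number of rx/tx queues %u exceeds max number of fds %u\n",
			nb_rx_queues, MAX_QUEUES);
		return -1;
	}
	return 0;
}
```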
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.395312653 +0800
+++ 0095-net-tap-restrict-maximum-number-of-MP-FDs.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From 288649a11a8a332727f2a988c676ff7dfd1bc4c5 Mon Sep 17 00:00:00 2001
+From 7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 288649a11a8a332727f2a988c676ff7dfd1bc4c5 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/tap/rte_eth_tap.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
+ drivers/net/tap/rte_eth_tap.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
@@ -21 +23 @@
-index 5ad3bbadd1..c486c6f073 100644
+index 3fa03cdbee..93bba3cec1 100644
@@ -24,5 +26,7 @@
-@@ -2391,9 +2391,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
- reply_param->q_count = 0;
-
- RTE_ASSERT(dev->data->nb_rx_queues == dev->data->nb_tx_queues);
-- if (dev->data->nb_rx_queues > RTE_MP_MAX_FD_NUM) {
+@@ -2393,9 +2393,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
+ /* Fill file descriptors for all queues */
+ reply.num_fds = 0;
+ reply_param->rxq_count = 0;
+- if (dev->data->nb_rx_queues + dev->data->nb_tx_queues >
+- RTE_MP_MAX_FD_NUM){
+- TAP_LOG(ERR, "Number of rx/tx queues exceeds max number of fds");
@@ -31,2 +35 @@
- TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
-- dev->data->nb_rx_queues, RTE_MP_MAX_FD_NUM);
++ TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'ethdev: verify queue ID in Tx done cleanup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (94 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
` (24 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d745b41ff05ca149ab263e21423ceb87bf017d45
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d745b41ff05ca149ab263e21423ceb87bf017d45 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:14:37 +0800
Subject: [PATCH] ethdev: verify queue ID in Tx done cleanup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 707f50cef003a89f8fc5170c2ca5aea808cf4297 ]
Verify queue_id in the rte_eth_tx_done_cleanup() API.
Fixes: 44a718c457b5 ("ethdev: add API to free consumed buffers in Tx ring")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
lib/ethdev/rte_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f067873a9..dfcdf76fee 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -2823,6 +2823,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = eth_dev_validate_tx_queue(dev, queue_id);
+ if (ret != 0)
+ return ret;
+#endif
+
if (*dev->dev_ops->tx_done_cleanup == NULL)
return -ENOTSUP;
--
2.34.1
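The shape of the fix is the usual validate-before-dispatch pattern: reject an out-of-range queue id before any per-queue state is touched. A simplified sketch (hypothetical names and queue count; upstream performs the check only under RTE_ETHDEV_DEBUG_TX):

```c
#include <assert.h>
#include <errno.h>

#define NB_TX_QUEUES 4	/* hypothetical configured Tx queue count */

/* Stand-in for eth_dev_validate_tx_queue(): reject out-of-range
 * ids before any per-queue array is indexed. */
static int validate_tx_queue(unsigned int queue_id)
{
	if (queue_id >= NB_TX_QUEUES)
		return -EINVAL;
	return 0;
}

/* The fixed entry point validates first, then would dispatch to
 * the driver's tx_done_cleanup callback. */
static int tx_done_cleanup(unsigned int queue_id)
{
	int ret = validate_tx_queue(queue_id);
	if (ret != 0)
		return ret;
	/* ... dev_ops->tx_done_cleanup() would run here ... */
	return 0;
}
```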
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.437338552 +0800
+++ 0096-ethdev-verify-queue-ID-in-Tx-done-cleanup.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 707f50cef003a89f8fc5170c2ca5aea808cf4297 Mon Sep 17 00:00:00 2001
+From d745b41ff05ca149ab263e21423ceb87bf017d45 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 707f50cef003a89f8fc5170c2ca5aea808cf4297 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 12f42f1d68..6413c54e3b 100644
+index 1f067873a9..dfcdf76fee 100644
@@ -21 +23 @@
-@@ -2910,6 +2910,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
+@@ -2823,6 +2823,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/hns3: verify reset type from firmware' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (95 preceding siblings ...)
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
` (23 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Jie Hai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=689c302c20ced757d4a0790c984aefc73c37daec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 689c302c20ced757d4a0790c984aefc73c37daec Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:14:57 +0800
Subject: [PATCH] net/hns3: verify reset type from firmware
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3db846003734d38d59950ebe024ad6d61afe08f0 ]
Verify the reset type obtained from firmware.
Fixes: 1c1eb759e9d7 ("net/hns3: support RAS process in Kunpeng 930")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_intr.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 0b768ef140..3f6b9e7fc4 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -2252,6 +2252,12 @@ hns3_handle_module_error_data(struct hns3_hw *hw, uint32_t *buf,
sum_err_info = (struct hns3_sum_err_info *)&buf[offset++];
mod_num = sum_err_info->mod_num;
reset_type = sum_err_info->reset_type;
+
+ if (reset_type >= HNS3_MAX_RESET) {
+ hns3_err(hw, "invalid reset type = %u", reset_type);
+ return;
+ }
+
if (reset_type && reset_type != HNS3_NONE_RESET)
hns3_atomic_set_bit(reset_type, &hw->reset.request);
--
2.34.1
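The underlying hazard is using a firmware-provided value as a bit index without a range check. A minimal sketch of the corrected handling (constant name is hypothetical, standing in for HNS3_MAX_RESET):

```c
#include <assert.h>
#include <stdint.h>

#define MAX_RESET 5	/* hypothetical stand-in for HNS3_MAX_RESET */

/* Set the reset-request bit only for an in-range, non-zero reset
 * type, mirroring the corrected firmware-value check. Returns -1
 * when firmware hands back garbage. */
static int handle_reset_type(uint32_t reset_type, uint64_t *request)
{
	if (reset_type >= MAX_RESET)
		return -1;	/* reject out-of-range value */
	if (reset_type != 0)
		*request |= UINT64_C(1) << reset_type;
	return 0;
}
```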
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.495799151 +0800
+++ 0097-net-hns3-verify-reset-type-from-firmware.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 3db846003734d38d59950ebe024ad6d61afe08f0 Mon Sep 17 00:00:00 2001
+From 689c302c20ced757d4a0790c984aefc73c37daec Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3db846003734d38d59950ebe024ad6d61afe08f0 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index f7162ee7bc..2de2b86b02 100644
+index 0b768ef140..3f6b9e7fc4 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/nfp: fix link change return value' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (96 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
` (22 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chaoyong He; +Cc: xuemingl, Long Wu, Peng Zhang, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=882c4f80f60bf33988eb31bf85d6104e7461be6d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 882c4f80f60bf33988eb31bf85d6104e7461be6d Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Sat, 12 Oct 2024 10:41:02 +0800
Subject: [PATCH] net/nfp: fix link change return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0ca4f216b89162ce8142d665a98924bdf4a23a6e ]
The return value of 'nfp_eth_set_configured()' has three possible
outcomes, but the original logic wrongly treated it as only two.
Fixes: 61d4008fe6bb ("net/nfp: support setting link up/down")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/nfp/nfp_ethdev.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 7495b01f16..e704c90dc5 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -201,30 +201,40 @@ error:
static int
nfp_net_set_link_up(struct rte_eth_dev *dev)
{
+ int ret;
struct nfp_net_hw *hw;
hw = dev->data->dev_private;
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
/* Configure the physical port down */
- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
+ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
else
- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
+ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
+ if (ret < 0)
+ return ret;
+
+ return 0;
}
/* Set the link down. */
static int
nfp_net_set_link_down(struct rte_eth_dev *dev)
{
+ int ret;
struct nfp_net_hw *hw;
hw = dev->data->dev_private;
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
/* Configure the physical port down */
- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
+ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
else
- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
+ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
+ if (ret < 0)
+ return ret;
+
+ return 0;
}
static uint8_t
--
2.34.1
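The fix normalizes a three-way result (negative errno on failure; 0 or positive on success) into the two-way contract the ethdev layer expects. A minimal sketch with a mocked helper (names are hypothetical):

```c
#include <assert.h>

/* Mock of nfp_eth_set_configured(): negative on failure, 0 if the
 * port was already in the requested state, positive on a change. */
static int set_configured(int mock_result)
{
	return mock_result;
}

/* The fix: map both 0 and positive results to 0 instead of
 * returning the raw value to the caller. */
static int set_link_up(int mock_result)
{
	int ret = set_configured(mock_result);
	if (ret < 0)
		return ret;
	return 0;
}
```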
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.547676450 +0800
+++ 0098-net-nfp-fix-link-change-return-value.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 0ca4f216b89162ce8142d665a98924bdf4a23a6e Mon Sep 17 00:00:00 2001
+From 882c4f80f60bf33988eb31bf85d6104e7461be6d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0ca4f216b89162ce8142d665a98924bdf4a23a6e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/nfp/nfp_ethdev.c | 14 ++++++++++++--
- 1 file changed, 12 insertions(+), 2 deletions(-)
+ drivers/net/nfp/nfp_ethdev.c | 18 ++++++++++++++----
+ 1 file changed, 14 insertions(+), 4 deletions(-)
@@ -21 +23 @@
-index 4b31785b9f..ef1c2a94b7 100644
+index 7495b01f16..e704c90dc5 100644
@@ -24 +26 @@
-@@ -527,26 +527,36 @@ error:
+@@ -201,30 +201,40 @@ error:
@@ -30 +31,0 @@
- struct nfp_net_hw_priv *hw_priv;
@@ -33 +33,0 @@
- hw_priv = dev->process_private;
@@ -35,2 +35,7 @@
-- return nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 1);
-+ ret = nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 1);
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ /* Configure the physical port down */
+- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
++ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
+ else
+- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
++ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
@@ -49 +53,0 @@
- struct nfp_net_hw_priv *hw_priv;
@@ -52 +55,0 @@
- hw_priv = dev->process_private;
@@ -54,2 +57,7 @@
-- return nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 0);
-+ ret = nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 0);
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ /* Configure the physical port down */
+- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
++ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
+ else
+- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
++ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
@@ -62 +70 @@
- static void
+ static uint8_t
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/nfp: fix pause frame setting check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (97 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
` (21 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chaoyong He; +Cc: xuemingl, Long Wu, Peng Zhang, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5 Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Sat, 12 Oct 2024 10:41:04 +0800
Subject: [PATCH] net/nfp: fix pause frame setting check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4bb6de512fbc361e16d5a7a38b704735c831540d ]
The return value of 'nfp_eth_config_commit_end()' has three possible
outcomes, but the original logic wrongly treated it as only two.
Fixes: 68aa35373a94 ("net/nfp: support setting pause frame switch mode")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/nfp/nfp_net_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 08cb5f7d3b..134a9b807e 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -2218,7 +2218,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw *net_hw,
}
err = nfp_eth_config_commit_end(nsp);
- if (err != 0) {
+ if (err < 0) {
PMD_DRV_LOG(ERR, "Failed to configure pause frame.");
return err;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.591013149 +0800
+++ 0099-net-nfp-fix-pause-frame-setting-check.patch 2024-11-11 14:23:05.262192837 +0800
@@ -1 +1 @@
-From 4bb6de512fbc361e16d5a7a38b704735c831540d Mon Sep 17 00:00:00 2001
+From bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4bb6de512fbc361e16d5a7a38b704735c831540d ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 80d60515d8..5c3a9a7ae7 100644
+index 08cb5f7d3b..134a9b807e 100644
@@ -24 +26 @@
-@@ -2520,7 +2520,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw_priv *hw_priv,
+@@ -2218,7 +2218,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw *net_hw,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/pcap: fix blocking Rx' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (98 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
` (20 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ofer Dagan, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=586405a26eab30b8ba04b54197aee639f89410d0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 586405a26eab30b8ba04b54197aee639f89410d0 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 5 Sep 2024 09:10:40 -0700
Subject: [PATCH] net/pcap: fix blocking Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f5ead8f84f205babb320a1d805fb436ba31a5532 ]
Use pcap_next_ex() rather than pcap_next() because pcap_next()
always blocks if there are no packets to receive.
Bugzilla ID: 1526
Fixes: 4c173302c307 ("pcap: add new driver")
Reported-by: Ofer Dagan <ofer.d@claroty.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Ofer Dagan <ofer.d@claroty.com>
---
.mailmap | 1 +
drivers/net/pcap/pcap_ethdev.c | 33 +++++++++++++++++----------------
2 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8d7fa55d9e..8a0ce733c3 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1059,6 +1059,7 @@ Noa Ezra <noae@mellanox.com>
Nobuhiro Miki <nmiki@yahoo-corp.jp>
Norbert Ciosek <norbertx.ciosek@intel.com>
Odi Assli <odia@nvidia.com>
+Ofer Dagan <ofer.d@claroty.com>
Ognjen Joldzic <ognjen.joldzic@gmail.com>
Ola Liljedahl <ola.liljedahl@arm.com>
Oleg Polyakov <olegp123@walla.co.il>
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 1fb98e3d2b..728ef85d53 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -274,7 +274,7 @@ static uint16_t
eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
unsigned int i;
- struct pcap_pkthdr header;
+ struct pcap_pkthdr *header;
struct pmd_process_private *pp;
const u_char *packet;
struct rte_mbuf *mbuf;
@@ -294,9 +294,13 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
*/
for (i = 0; i < nb_pkts; i++) {
/* Get the next PCAP packet */
- packet = pcap_next(pcap, &header);
- if (unlikely(packet == NULL))
+ int ret = pcap_next_ex(pcap, &header, &packet);
+ if (ret != 1) {
+ if (ret == PCAP_ERROR)
+ pcap_q->rx_stat.err_pkts++;
+
break;
+ }
mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
if (unlikely(mbuf == NULL)) {
@@ -304,33 +308,30 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
}
- if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
+ uint32_t len = header->caplen;
+ if (len <= rte_pktmbuf_tailroom(mbuf)) {
/* pcap packet will fit in the mbuf, can copy it */
- rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet,
- header.caplen);
- mbuf->data_len = (uint16_t)header.caplen;
+ rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, len);
+ mbuf->data_len = len;
} else {
/* Try read jumbo frame into multi mbufs. */
if (unlikely(eth_pcap_rx_jumbo(pcap_q->mb_pool,
- mbuf,
- packet,
- header.caplen) == -1)) {
+ mbuf, packet, len) == -1)) {
pcap_q->rx_stat.err_pkts++;
rte_pktmbuf_free(mbuf);
break;
}
}
- mbuf->pkt_len = (uint16_t)header.caplen;
- *RTE_MBUF_DYNFIELD(mbuf, timestamp_dynfield_offset,
- rte_mbuf_timestamp_t *) =
- (uint64_t)header.ts.tv_sec * 1000000 +
- header.ts.tv_usec;
+ mbuf->pkt_len = len;
+ uint64_t us = (uint64_t)header->ts.tv_sec * US_PER_S + header->ts.tv_usec;
+
+ *RTE_MBUF_DYNFIELD(mbuf, timestamp_dynfield_offset, rte_mbuf_timestamp_t *) = us;
mbuf->ol_flags |= timestamp_rx_dynflag;
mbuf->port = pcap_q->port_id;
bufs[num_rx] = mbuf;
num_rx++;
- rx_bytes += header.caplen;
+ rx_bytes += len;
}
pcap_q->rx_stat.pkts += num_rx;
pcap_q->rx_stat.bytes += rx_bytes;
--
2.34.1
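The behavioral difference is that pcap_next() returns NULL for both "no packet" and "error" and can block indefinitely, while pcap_next_ex() reports a status code (1 = packet, 0 = timeout, PCAP_ERROR on failure), letting the Rx burst return immediately and count errors. A sketch of the return-code handling only (the enum names are mine, not libpcap's):

```c
#include <assert.h>

#define PCAP_ERROR (-1)	/* matches libpcap's definition */

/* Classify a pcap_next_ex() return value the way the fixed Rx
 * loop does: deliver the packet, stop the burst on timeout, or
 * count an error and stop. */
enum rx_action { RX_PACKET, RX_STOP, RX_STOP_WITH_ERROR };

static enum rx_action classify(int ret)
{
	if (ret == 1)
		return RX_PACKET;
	if (ret == PCAP_ERROR)
		return RX_STOP_WITH_ERROR;
	return RX_STOP;	/* 0 (timeout) or other non-packet results */
}
```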
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.636059148 +0800
+++ 0100-net-pcap-fix-blocking-Rx.patch 2024-11-11 14:23:05.262192837 +0800
@@ -1 +1 @@
-From f5ead8f84f205babb320a1d805fb436ba31a5532 Mon Sep 17 00:00:00 2001
+From 586405a26eab30b8ba04b54197aee639f89410d0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f5ead8f84f205babb320a1d805fb436ba31a5532 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 3e9e8b416e..bd7958652a 100644
+index 8d7fa55d9e..8a0ce733c3 100644
@@ -25 +27,2 @@
-@@ -1103,6 +1103,7 @@ Nobuhiro Miki <nmiki@yahoo-corp.jp>
+@@ -1059,6 +1059,7 @@ Noa Ezra <noae@mellanox.com>
+ Nobuhiro Miki <nmiki@yahoo-corp.jp>
@@ -27 +29,0 @@
- Norbert Zulinski <norbertx.zulinski@intel.com>
@@ -32 +34 @@
- Oleg Akhrem <oleg.akhrem@intel.com>
+ Oleg Polyakov <olegp123@walla.co.il>
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/ice/base: add bounds check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (99 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
` (19 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Fabio Pricoco; +Cc: xuemingl, Bruce Richardson, Vladimir Medvedkin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0df9a046c7c8315e7ec101eca515c5ea7a5b44e7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0df9a046c7c8315e7ec101eca515c5ea7a5b44e7 Mon Sep 17 00:00:00 2001
From: Fabio Pricoco <fabio.pricoco@intel.com>
Date: Mon, 14 Oct 2024 12:02:06 +0100
Subject: [PATCH] net/ice/base: add bounds check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 ]
Refactor the while loop to add a check that the head values read
from hardware are in the correct range.
Fixes: 6c1f26be50a2 ("net/ice/base: add control queue information")
Signed-off-by: Fabio Pricoco <fabio.pricoco@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ice/base/ice_controlq.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index c34407b48c..4896fd2731 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -846,12 +846,23 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
u16 ntc = sq->next_to_clean;
struct ice_sq_cd *details;
struct ice_aq_desc *desc;
+ u32 head;
desc = ICE_CTL_Q_DESC(*sq, ntc);
details = ICE_CTL_Q_DETAILS(*sq, ntc);
- while (rd32(hw, cq->sq.head) != ntc) {
- ice_debug(hw, ICE_DBG_AQ_MSG, "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+ head = rd32(hw, sq->head);
+ if (head >= sq->count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Read head value (%d) exceeds allowed range.\n",
+ head);
+ return 0;
+ }
+
+ while (head != ntc) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "ntc %d head %d.\n",
+ ntc, head);
ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
ntc++;
@@ -859,6 +870,14 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
ntc = 0;
desc = ICE_CTL_Q_DESC(*sq, ntc);
details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+ head = rd32(hw, sq->head);
+ if (head >= sq->count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Read head value (%d) exceeds allowed range.\n",
+ head);
+ return 0;
+ }
}
sq->next_to_clean = ntc;
--
2.34.1
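The essence of the fix is to re-validate the hardware-reported head index against the ring size before using it to advance the clean pointer. A simplified, self-contained sketch (mock register read; for brevity it re-reads head on every iteration, whereas the driver re-reads only after the index wraps):

```c
#include <assert.h>
#include <stdint.h>

#define SQ_COUNT 8	/* hypothetical send-queue ring size */

/* Mock register read: returns the next value from a scripted
 * sequence, standing in for rd32(hw, sq->head). */
static const uint32_t *head_seq;
static uint32_t read_head(void) { return *head_seq++; }

/* Walk next_to_clean toward head, validating every head value as
 * the fixed ice_clean_sq() does. Returns the number of cleaned
 * descriptors, or 0 when hardware reports an out-of-range head. */
static unsigned int clean_sq(uint32_t ntc)
{
	uint32_t head = read_head();
	unsigned int cleaned = 0;

	if (head >= SQ_COUNT)
		return 0;	/* bogus value from hardware */

	while (head != ntc) {
		ntc = (ntc + 1) % SQ_COUNT;
		cleaned++;
		head = read_head();
		if (head >= SQ_COUNT)
			return 0;
	}
	return cleaned;
}
```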
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.692233347 +0800
+++ 0101-net-ice-base-add-bounds-check.patch 2024-11-11 14:23:05.282192836 +0800
@@ -1 +1 @@
-From 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 Mon Sep 17 00:00:00 2001
+From 0df9a046c7c8315e7ec101eca515c5ea7a5b44e7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index af27dc8542..b210495827 100644
+index c34407b48c..4896fd2731 100644
@@ -23,2 +25 @@
-@@ -839,16 +839,35 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
- struct ice_ctl_q_ring *sq = &cq->sq;
+@@ -846,12 +846,23 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -25,0 +27 @@
+ struct ice_sq_cd *details;
@@ -29,0 +32 @@
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
@@ -45,0 +49 @@
+ ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
@@ -47 +51 @@
- if (ntc == sq->count)
+@@ -859,6 +870,14 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -49,0 +54 @@
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/ice/base: fix VLAN replay after reset' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Dave Ertman
Cc: xuemingl, Jacob Keller, Bruce Richardson, Vladimir Medvedkin,
dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7a744b7e5badcfdc620df91c533d2b4d2abd068a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7a744b7e5badcfdc620df91c533d2b4d2abd068a Mon Sep 17 00:00:00 2001
From: Dave Ertman <david.m.ertman@intel.com>
Date: Mon, 14 Oct 2024 12:02:07 +0100
Subject: [PATCH] net/ice/base: fix VLAN replay after reset
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8e191a67df2d217c2cbd96325b38bf2f5f028f03 ]
If there is more than one VLAN defined when any reset that affects the
PF is initiated, after the reset rebuild, no traffic will pass on any
VLAN but the last one created.
This is caused by the iteration through the VLANs during replay each
clearing the vsi_map bitmap of the VSI that is being replayed. The
problem is that during the replay, the pointer to the vsi_map bitmap is
used by each successive VLAN to determine if it should be replayed on
this VSI.
The logic was that the replay of the VLAN would replace the bit in the
map before the next VLAN would iterate through. But, since the replay
copies the old bitmap pointer to filt_replay_rules and creates a new one
for the recreated VLANs, it does not do this, and leaves the old,
broken bitmap to be used to replay the remaining VLANs.
Since the old bitmap will be cleaned up in post replay cleanup, there is
no need to alter it and break following VLAN replay, so don't clear the
bit.
Fixes: c7dd15931183 ("net/ice/base: add virtual switch code")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ice/base/ice_switch.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c4fd07199e..7b103e5e34 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -10023,8 +10023,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
if (!itr->vsi_list_info ||
!ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
continue;
- /* Clearing it so that the logic can add it back */
- ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
f_entry.fltr_info.vsi_handle = vsi_handle;
f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
/* update the src in case it is VSI num */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.763788646 +0800
+++ 0102-net-ice-base-fix-VLAN-replay-after-reset.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From 8e191a67df2d217c2cbd96325b38bf2f5f028f03 Mon Sep 17 00:00:00 2001
+From 7a744b7e5badcfdc620df91c533d2b4d2abd068a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8e191a67df2d217c2cbd96325b38bf2f5f028f03 ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
@@ -38 +40 @@
-index 96ef26d535..a3786961e6 100644
+index c4fd07199e..7b103e5e34 100644
@@ -41 +43 @@
-@@ -10110,8 +10110,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
+@@ -10023,8 +10023,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
* patch 'net/iavf: preserve MAC address with i40e PF Linux driver' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Bruce Richardson, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=819d57cd27c47941913084143eb32d1cb3a8bf20
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 819d57cd27c47941913084143eb32d1cb3a8bf20 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Tue, 1 Oct 2024 11:12:54 +0200
Subject: [PATCH] net/iavf: preserve MAC address with i40e PF Linux driver
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3d42086def307be853d1e2e5b9d1e76725c3661f ]
Following two upstream Linux kernel changes (see links), the mac address
of an iavf port, serviced by an i40e PF driver, is lost when the DPDK iavf
driver probes the port again (which may be triggered at any point of a
DPDK application's life, like when a reset event is triggered by the PF).
A first change results in the mac address of the VF port being reset to 0
during the VIRTCHNL_OP_GET_VF_RESOURCES query.
The i40e PF driver change is pretty obscure but the iavf Linux driver does
set VIRTCHNL_VF_OFFLOAD_USO.
Announcing such a capability in the DPDK driver does not seem to be an
issue, so do the same in DPDK to keep the legacy behavior of a fixed mac.
Then a second change in the kernel results in the VF mac address being
cleared when the VF driver removes its default mac address.
Removing (unicast or multicast) mac addresses is not done by the kernel VF
driver in general.
The reason why the DPDK driver behaves like this is undocumented
(and lost because the authors are not active anymore).
Aligning DPDK behavior to the upstream kernel driver is safer in any
case.
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fed0d9f13266
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ceb29474bbbc
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_ethdev.c | 22 +++++-----------------
drivers/net/iavf/iavf_vchnl.c | 1 +
2 files changed, 6 insertions(+), 17 deletions(-)
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 1a98c7734c..9f3658c48b 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1033,7 +1033,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_configure_queues(adapter,
IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
- goto err_queue;
+ goto error;
}
num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
index += IAVF_CFG_Q_NUM_PER_BUF;
@@ -1041,12 +1041,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
- goto err_queue;
+ goto error;
}
if (iavf_config_rx_queues_irqs(dev, intr_handle) != 0) {
PMD_DRV_LOG(ERR, "configure irq failed");
- goto err_queue;
+ goto error;
}
/* re-enable intr again, because efd assign may change */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
@@ -1066,14 +1066,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_start_queues(dev) != 0) {
PMD_DRV_LOG(ERR, "enable queues failed");
- goto err_mac;
+ goto error;
}
return 0;
-err_mac:
- iavf_add_del_all_mac_addr(adapter, false);
-err_queue:
+error:
return -1;
}
@@ -1102,16 +1100,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Rx interrupt vector mapping free */
rte_intr_vec_list_free(intr_handle);
- /* adminq will be disabled when vf is resetting. */
- if (!vf->in_reset_recovery) {
- /* remove all mac addrs */
- iavf_add_del_all_mac_addr(adapter, false);
-
- /* remove all multicast addresses */
- iavf_add_del_mc_addr_list(adapter, vf->mc_addrs, vf->mc_addrs_num,
- false);
- }
-
iavf_stop_queues(dev);
adapter->stopped = 1;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 8ca104c04e..71be87845a 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -710,6 +710,7 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
VIRTCHNL_VF_OFFLOAD_FSUB_PF |
VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_OFFLOAD_USO |
VIRTCHNL_VF_OFFLOAD_CRC |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VIRTCHNL_VF_LARGE_NUM_QPAIRS |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.886694643 +0800
+++ 0103-net-iavf-preserve-MAC-address-with-i40e-PF-Linux-dri.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From 3d42086def307be853d1e2e5b9d1e76725c3661f Mon Sep 17 00:00:00 2001
+From 819d57cd27c47941913084143eb32d1cb3a8bf20 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3d42086def307be853d1e2e5b9d1e76725c3661f ]
@@ -27,2 +29,0 @@
-Cc: stable@dpdk.org
-
@@ -40 +41 @@
-index c200f63b4f..7f80cd6258 100644
+index 1a98c7734c..9f3658c48b 100644
@@ -43 +44 @@
-@@ -1044,7 +1044,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1033,7 +1033,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -52 +53 @@
-@@ -1052,12 +1052,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1041,12 +1041,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -67 +68 @@
-@@ -1077,14 +1077,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1066,14 +1066,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -84 +85 @@
-@@ -1113,16 +1111,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
+@@ -1102,16 +1100,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
@@ -102 +103 @@
-index 69420bc9b6..065ab3594c 100644
+index 8ca104c04e..71be87845a 100644
* patch 'net/mlx5: workaround list management of Rx queue control' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Bing Zhao; +Cc: xuemingl, Viacheslav Ovsiienko, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=789faccc7ec15de2004468416d46ea1184c68f25
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 789faccc7ec15de2004468416d46ea1184c68f25 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Tue, 23 Jul 2024 14:14:11 +0300
Subject: [PATCH] net/mlx5: workaround list management of Rx queue control
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f957ac99643535fd218753f4f956fc9c5aadd23c ]
The LIST_REMOVE macro only removes the entry from the list and
updates the list itself. The pointers of this entry are not reset to
NULL, so nothing prevents the entry from being accessed a second time.
In the previous fix for the memory access, the "rxq_ctrl" was
removed from the list in the device private data when the "refcnt" was
decreased to 0. In scenarios with only shared or only non-shared queues,
this was safe since all the "rxq_ctrl" entries were freed or kept.
There is one case where shared and non-shared Rx queues are configured
simultaneously, for example, a hairpin Rx queue cannot be shared.
When closing the port that allocated the shared Rx queues'
"rxq_ctrl", if the next entry is a hairpin "rxq_ctrl", the hairpin
"rxq_ctrl" will be freed directly with the other resources. When trying
to close another port sharing the "rxq_ctrl", LIST_REMOVE will be
called again and cause a use-after-free (UAF) issue. If the memory is
no longer mapped, there will be a SIGSEGV.
Add a flag in the Rx queue private structure so that the "rxq_ctrl" is
removed from the list only by the port/queue that allocated it.
Fixes: bcc220cb57d7 ("net/mlx5: fix shared Rx queue list management")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_rx.h | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d0ceae72ea..08ab0a042d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -173,6 +173,7 @@ struct mlx5_rxq_ctrl {
/* RX queue private data. */
struct mlx5_rxq_priv {
uint16_t idx; /* Queue index. */
+ bool possessor; /* Shared rxq_ctrl allocated for the 1st time. */
uint32_t refcnt; /* Reference counter. */
struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1bb036afeb..e45cca9133 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -938,6 +938,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rte_errno = ENOMEM;
return -rte_errno;
}
+ rxq->possessor = true;
}
rxq->priv = priv;
rxq->idx = idx;
@@ -2016,6 +2017,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
tmpl->rxq.idx = idx;
rxq->hairpin_conf = *hairpin_conf;
+ rxq->possessor = true;
mlx5_rxq_ref(dev, idx);
LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
return tmpl;
@@ -2283,7 +2285,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
RTE_ETH_QUEUE_STATE_STOPPED;
}
} else { /* Refcnt zero, closing device. */
- LIST_REMOVE(rxq_ctrl, next);
+ if (rxq->possessor)
+ LIST_REMOVE(rxq_ctrl, next);
LIST_REMOVE(rxq, owner_entry);
if (LIST_EMPTY(&rxq_ctrl->owners)) {
if (!rxq_ctrl->is_hairpin)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.970133241 +0800
+++ 0104-net-mlx5-workaround-list-management-of-Rx-queue-cont.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From f957ac99643535fd218753f4f956fc9c5aadd23c Mon Sep 17 00:00:00 2001
+From 789faccc7ec15de2004468416d46ea1184c68f25 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f957ac99643535fd218753f4f956fc9c5aadd23c ]
@@ -28 +30,0 @@
-Cc: stable@dpdk.org
@@ -38 +40 @@
-index 7d144921ab..9bcb43b007 100644
+index d0ceae72ea..08ab0a042d 100644
@@ -46 +48 @@
- RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
+ uint32_t refcnt; /* Reference counter. */
@@ -50 +52 @@
-index f13fc3b353..c6655b7db4 100644
+index 1bb036afeb..e45cca9133 100644
@@ -61 +63 @@
-@@ -2015,6 +2016,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+@@ -2016,6 +2017,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
@@ -69 +71 @@
-@@ -2282,7 +2284,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2283,7 +2285,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
* patch 'net/mlx5/hws: fix flex item as tunnel header' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f64e2a1e86b8b85627b7ce9563278eff71e26c8b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f64e2a1e86b8b85627b7ce9563278eff71e26c8b Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:17 +0300
Subject: [PATCH] net/mlx5/hws: fix flex item as tunnel header
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 624ca89b57550f13c49224d931d391680dc62d69 ]
The RTE flex item can represent the tunnel header and
split the inner and outer layer items. HWS did not
support these flex item specifics.
Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 031e87bc0c..1b8cb18d63 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2536,8 +2536,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
break;
case RTE_FLOW_ITEM_TYPE_FLEX:
ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
- item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
- MLX5_FLOW_ITEM_OUTER_FLEX;
+ if (ret == 0) {
+ enum rte_flow_item_flex_tunnel_mode tunnel_mode =
+ FLEX_TUNNEL_MODE_SINGLE;
+
+ ret = mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+ if (tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
+ item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
+ else
+ item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+ MLX5_FLOW_ITEM_OUTER_FLEX;
+ }
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
ret = mlx5dr_definer_conv_item_mpls(&cd, items, i);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.026742840 +0800
+++ 0105-net-mlx5-hws-fix-flex-item-as-tunnel-header.patch 2024-11-11 14:23:05.302192836 +0800
@@ -1 +1 @@
-From 624ca89b57550f13c49224d931d391680dc62d69 Mon Sep 17 00:00:00 2001
+From f64e2a1e86b8b85627b7ce9563278eff71e26c8b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 624ca89b57550f13c49224d931d391680dc62d69 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 51a3f7be4b..2dfcc5eba6 100644
+index 031e87bc0c..1b8cb18d63 100644
@@ -23 +25 @@
-@@ -3267,8 +3267,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
+@@ -2536,8 +2536,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
* patch 'net/mlx5: add flex item query for tunnel mode' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cb33b62e095707f9ad4c014f83e60359f06af2ec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cb33b62e095707f9ad4c014f83e60359f06af2ec Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:16 +0300
Subject: [PATCH] net/mlx5: add flex item query for tunnel mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 850233aca685ed1142ae2003ec6d4eefe82df4bd ]
When parsing the RTE item array, the PMD needs to know
whether the flex item represents the tunnel header.
The appropriate tunnel mode query API is added.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 ++
drivers/net/mlx5/mlx5_flow_flex.c | 27 +++++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0c81bcab9f..f48b58dcf4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2498,6 +2498,8 @@ int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
bool is_mask, bool tunnel, uint32_t *value);
+int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+ enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
struct rte_flow_item_flex_handle *handle,
bool acquire);
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 4ae03a23f1..e7e6358144 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -291,6 +291,33 @@ mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
return 0;
}
+/**
+ * Get the flex parser tunnel mode.
+ *
+ * @param[in] item
+ * RTE Flex item.
+ * @param[in, out] tunnel_mode
+ * Pointer to return tunnel mode.
+ *
+ * @return
+ * 0 on success, otherwise negative error code.
+ */
+int
+mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+ enum rte_flow_item_flex_tunnel_mode *tunnel_mode)
+{
+ if (item && item->spec && tunnel_mode) {
+ const struct rte_flow_item_flex *spec = item->spec;
+ struct mlx5_flex_item *flex = (struct mlx5_flex_item *)spec->handle;
+
+ if (flex) {
+ *tunnel_mode = flex->tunnel_mode;
+ return 0;
+ }
+ }
+ return -EINVAL;
+}
+
/**
* Translate item pattern into matcher fields according to translation
* array.
--
2.34.1
* patch 'net/mlx5: fix flex item tunnel mode' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=06b70793f9f1885d71895b16d648be571c8045c1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 06b70793f9f1885d71895b16d648be571c8045c1 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:18 +0300
Subject: [PATCH] net/mlx5: fix flex item tunnel mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e46b26663de964b54ed9fc2e7eade07261d8e396 ]
The RTE flex item can represent the tunnel header itself
and split inner and outer items; this should be reflected
in the item flags while the PMD is processing the item array.
Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index af4df13b2f..ca2611942e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -394,6 +394,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
uint64_t last_item = 0;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;
int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
int item_type = items->type;
@@ -439,6 +440,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
case RTE_FLOW_ITEM_TYPE_GTP:
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ break;
+ case RTE_FLOW_ITEM_TYPE_FLEX:
+ mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+ last_item = tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL ?
+ MLX5_FLOW_ITEM_FLEX_TUNNEL :
+ tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+ MLX5_FLOW_ITEM_OUTER_FLEX;
default:
break;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.103428639 +0800
+++ 0107-net-mlx5-fix-flex-item-tunnel-mode.patch 2024-11-11 14:23:05.312192836 +0800
@@ -1 +1 @@
-From e46b26663de964b54ed9fc2e7eade07261d8e396 Mon Sep 17 00:00:00 2001
+From 06b70793f9f1885d71895b16d648be571c8045c1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e46b26663de964b54ed9fc2e7eade07261d8e396 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 279ffdc03a..986fd8a93d 100644
+index af4df13b2f..ca2611942e 100644
@@ -23 +25 @@
-@@ -558,6 +558,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
+@@ -394,6 +394,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
@@ -31,3 +33,3 @@
-@@ -606,6 +607,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
- case RTE_FLOW_ITEM_TYPE_COMPARE:
- last_item = MLX5_FLOW_ITEM_COMPARE;
+@@ -439,6 +440,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ last_item = MLX5_FLOW_LAYER_GTP;
@@ -34,0 +37 @@
++ break;
@@ -41 +43,0 @@
-+ break;
* patch 'net/mlx5: fix number of supported flex parsers' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2ab85e98b8bf7c5e6f56d85a9e40f97106da4550
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2ab85e98b8bf7c5e6f56d85a9e40f97106da4550 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:19 +0300
Subject: [PATCH] net/mlx5: fix number of supported flex parsers
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f ]
The hardware supports up to 8 flex parser configurations.
Some of them can be utilized internally by firmware, depending on
the configured profile ("FLEX_PARSER_PROFILE_ENABLE" in NV-setting).
The firmware does not report in its capabilities how many flex parser
configurations remain available (this is a device-wide resource
and can be allocated at runtime by other agents - kernel, DPDK
applications, etc.), and once no more parsers are available at the
moment of parse object creation, the firmware just returns an error.
Fixes: db25cadc0887 ("net/mlx5: add flex item operations")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f48b58dcf4..61f07f459b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -69,7 +69,7 @@
#define MLX5_ROOT_TBL_MODIFY_NUM 16
/* Maximal number of flex items created on the port.*/
-#define MLX5_PORT_FLEX_ITEM_NUM 4
+#define MLX5_PORT_FLEX_ITEM_NUM 8
/* Maximal number of field/field parts to map into sample registers .*/
#define MLX5_FLEX_ITEM_MAPPING_NUM 32
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.153862138 +0800
+++ 0108-net-mlx5-fix-number-of-supported-flex-parsers.patch 2024-11-11 14:23:05.312192836 +0800
@@ -1 +1 @@
-From 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f Mon Sep 17 00:00:00 2001
+From 2ab85e98b8bf7c5e6f56d85a9e40f97106da4550 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 5f7bfcd613..399923b443 100644
+index f48b58dcf4..61f07f459b 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
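The commit above raises the per-port flex item limit to the 8 parser configurations the hardware supports, but as the commit message notes, some of those slots may already be held by other agents, and no capability reports how many remain. A minimal toy sketch of that situation (all names are illustrative, none of this is the mlx5 API): the only reliable strategy is to attempt creation and handle the failure.

```c
#include <assert.h>

/* Toy model (not the mlx5 API): the device exposes TOTAL_PARSERS flex
 * parser slots, but some may already be held by other agents (firmware,
 * kernel, other DPDK processes). Since no capability reports how many
 * remain, an application must attempt creation and handle the error. */
#define TOTAL_PARSERS 8

static int parsers_used;

/* Pretend some slots were grabbed by other agents before we start. */
void parsers_init(int taken_by_others)
{
	parsers_used = taken_by_others;
}

/* Returns a slot id on success, -1 when the device-wide pool is empty. */
int parser_create(void)
{
	if (parsers_used >= TOTAL_PARSERS)
		return -1;
	return parsers_used++;
}
```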
* patch 'app/testpmd: remove flex item init command leftover' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (107 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
` (11 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=179f82ff012e353903bad9c9b0451b09c6941600
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 179f82ff012e353903bad9c9b0451b09c6941600 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:20 +0300
Subject: [PATCH] app/testpmd: remove flex item init command leftover
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d5c50397a1cc06419970afbea9cd1c37e3c08a5b ]
There was a leftover "flow flex init" command that was used
for debug purposes and had no useful functionality in
the production code.
Fixes: 59f3a8acbcdb ("app/testpmd: add flex item commands")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7e6e06a04f..4b13d84ad1 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -105,7 +105,6 @@ enum index {
HASH,
/* Flex arguments */
- FLEX_ITEM_INIT,
FLEX_ITEM_CREATE,
FLEX_ITEM_DESTROY,
@@ -1249,7 +1248,6 @@ struct parse_action_priv {
})
static const enum index next_flex_item[] = {
- FLEX_ITEM_INIT,
FLEX_ITEM_CREATE,
FLEX_ITEM_DESTROY,
ZERO,
@@ -3932,15 +3930,6 @@ static const struct token token_list[] = {
.next = NEXT(next_flex_item),
.call = parse_flex,
},
- [FLEX_ITEM_INIT] = {
- .name = "init",
- .help = "flex item init",
- .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token),
- ARGS_ENTRY(struct buffer, port)),
- .next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN),
- NEXT_ENTRY(COMMON_PORT_ID)),
- .call = parse_flex
- },
[FLEX_ITEM_CREATE] = {
.name = "create",
.help = "flex item create",
@@ -10720,7 +10709,6 @@ parse_flex(struct context *ctx, const struct token *token,
switch (ctx->curr) {
default:
break;
- case FLEX_ITEM_INIT:
case FLEX_ITEM_CREATE:
case FLEX_ITEM_DESTROY:
out->command = ctx->curr;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.191913637 +0800
+++ 0109-app-testpmd-remove-flex-item-init-command-leftover.patch 2024-11-11 14:23:05.322192836 +0800
@@ -1 +1 @@
-From d5c50397a1cc06419970afbea9cd1c37e3c08a5b Mon Sep 17 00:00:00 2001
+From 179f82ff012e353903bad9c9b0451b09c6941600 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d5c50397a1cc06419970afbea9cd1c37e3c08a5b ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 5451b3a453..5f71e5ba44 100644
+index 7e6e06a04f..4b13d84ad1 100644
@@ -23 +25 @@
-@@ -106,7 +106,6 @@ enum index {
+@@ -105,7 +105,6 @@ enum index {
@@ -31 +33 @@
-@@ -1320,7 +1319,6 @@ struct parse_action_priv {
+@@ -1249,7 +1248,6 @@ struct parse_action_priv {
@@ -39 +41 @@
-@@ -4188,15 +4186,6 @@ static const struct token token_list[] = {
+@@ -3932,15 +3930,6 @@ static const struct token token_list[] = {
@@ -55 +57 @@
-@@ -11472,7 +11461,6 @@ parse_flex(struct context *ctx, const struct token *token,
+@@ -10720,7 +10709,6 @@ parse_flex(struct context *ctx, const struct token *token,
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/mlx5: fix next protocol validation after flex item' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (108 preceding siblings ...)
2024-11-11 6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
` (10 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e03621f4c4a16e42b913431bcb86caecd3240865
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e03621f4c4a16e42b913431bcb86caecd3240865 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:21 +0300
Subject: [PATCH] net/mlx5: fix next protocol validation after flex item
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3847a3b192315491118eab9830e695eb2c9946e2 ]
During flow validation some items may check the preceding protocols.
In the case of a flex item the next protocol is opaque (or there can
be multiple ones), so we should set a neutral value and allow
successful validation, for example, for the combination of a flex
item and a following ESP item.
Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 863737ceba..af6bf7e411 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7850,6 +7850,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
tunnel != 0, error);
if (ret < 0)
return ret;
+ /* Reset for next proto, it is unknown. */
+ next_protocol = 0xff;
break;
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
ret = flow_dv_validate_item_meter_color(dev, items,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:10.231892736 +0800
+++ 0110-net-mlx5-fix-next-protocol-validation-after-flex-ite.patch 2024-11-11 14:23:05.332192835 +0800
@@ -1 +1 @@
-From 3847a3b192315491118eab9830e695eb2c9946e2 Mon Sep 17 00:00:00 2001
+From e03621f4c4a16e42b913431bcb86caecd3240865 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3847a3b192315491118eab9830e695eb2c9946e2 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index e8ca2b3ed6..4451b114ae 100644
+index 863737ceba..af6bf7e411 100644
@@ -24 +26 @@
-@@ -8194,6 +8194,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7850,6 +7850,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
^ permalink raw reply [flat|nested] 128+ messages in thread
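The one-line fix above can be modeled in isolation. This toy sketch (names are illustrative, not the mlx5 driver's internals) tracks the next protocol announced by the previous pattern item; a flex item's next protocol is opaque, so it resets the tracker to a neutral wildcard instead of leaving a stale value that would wrongly fail a following ESP item.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the validation loop fix. PROTO_ANY is the neutral
 * wildcard the patch assigns after a flex item. */
#define PROTO_ANY 0xff
#define PROTO_UDP 17 /* IPPROTO_UDP */
#define PROTO_ESP 50 /* IPPROTO_ESP */

enum item_type { ITEM_END, ITEM_IPV4_UDP, ITEM_FLEX, ITEM_ESP };

/* Returns 0 when the pattern validates, -1 on a protocol mismatch. */
int validate_pattern(const enum item_type *items)
{
	uint8_t next_protocol = PROTO_ANY;

	for (; *items != ITEM_END; items++) {
		switch (*items) {
		case ITEM_IPV4_UDP:
			next_protocol = PROTO_UDP; /* announces UDP next */
			break;
		case ITEM_FLEX:
			/* Reset for next proto, it is unknown (the fix). */
			next_protocol = PROTO_ANY;
			break;
		case ITEM_ESP:
			if (next_protocol != PROTO_ANY &&
			    next_protocol != PROTO_ESP)
				return -1; /* preceding item contradicts ESP */
			break;
		default:
			break;
		}
	}
	return 0;
}
```

Without the reset, the "IPv4/UDP, flex, ESP" pattern would still carry the UDP next protocol into the ESP check and be rejected.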
* patch 'net/mlx5: fix non full word sample fields in flex item' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (109 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
` (9 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=86088e4684e1c908882cb293d28d3b85226535dc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 86088e4684e1c908882cb293d28d3b85226535dc Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:22 +0300
Subject: [PATCH] net/mlx5: fix non full word sample fields in flex item
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 97e19f0762e5235d6914845a59823d4ea36925bb ]
If the sample field in a flex item did not cover an entire
32-bit word (the width was not a full 32 bits) or was not aligned
on a byte boundary, the match on this sample in flows
happened to be ignored or wrongly missed. The field mask
"def" was built in the wrong endianness, and non-byte-aligned
shifts were wrongly performed for the pattern masks and values.
Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 4 +--
drivers/net/mlx5/mlx5.h | 5 ++-
drivers/net/mlx5/mlx5_flow_dv.c | 5 ++-
drivers/net/mlx5/mlx5_flow_flex.c | 47 +++++++++++++--------------
4 files changed, 29 insertions(+), 32 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 1b8cb18d63..daee2b6eb7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -436,7 +436,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
idx = fc->fname - MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
byte_off -= idx * sizeof(uint32_t);
ret = mlx5_flex_get_parser_value_per_byte_off(flex, flex->handle, byte_off,
- false, is_inner, &val);
+ is_inner, &val);
if (ret == -1 || !val)
return;
@@ -2314,7 +2314,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
byte_off = base_off - i * sizeof(uint32_t);
ret = mlx5_flex_get_parser_value_per_byte_off(m, v->handle, byte_off,
- true, is_inner, &mask);
+ is_inner, &mask);
if (ret == -1) {
rte_errno = EINVAL;
return rte_errno;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 61f07f459b..bce1d9e749 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2493,11 +2493,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
void *key, const struct rte_flow_item *item,
bool is_inner);
int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
- uint32_t idx, uint32_t *pos,
- bool is_inner, uint32_t *def);
+ uint32_t idx, uint32_t *pos, bool is_inner);
int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
- bool is_mask, bool tunnel, uint32_t *value);
+ bool tunnel, uint32_t *value);
int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index af6bf7e411..b447b1598a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1481,7 +1481,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
const struct mlx5_flex_pattern_field *map;
uint32_t offset = data->offset;
uint32_t width_left = width;
- uint32_t def;
uint32_t cur_width = 0;
uint32_t tmp_ofs;
uint32_t idx = 0;
@@ -1506,7 +1505,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
tmp_ofs = pos < data->offset ? data->offset - pos : 0;
for (j = i; i < flex->mapnum && width_left > 0; ) {
map = flex->map + i;
- id = mlx5_flex_get_sample_id(flex, i, &pos, false, &def);
+ id = mlx5_flex_get_sample_id(flex, i, &pos, false);
if (id == -1) {
i++;
/* All left length is dummy */
@@ -1525,7 +1524,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
* 2. Width has been covered.
*/
for (j = i + 1; j < flex->mapnum; j++) {
- tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false, &def);
+ tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false);
if (tmp_id == -1) {
i = j;
pos -= flex->map[j].width;
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index e7e6358144..c5dd323fa2 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -118,28 +118,32 @@ mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
uint32_t pos, uint32_t width, uint32_t shift)
{
const uint8_t *ptr = item->pattern + pos / CHAR_BIT;
- uint32_t val, vbits;
+ uint32_t val, vbits, skip = pos % CHAR_BIT;
/* Proceed the bitfield start byte. */
MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT && width);
MLX5_ASSERT(width + shift <= sizeof(uint32_t) * CHAR_BIT);
if (item->length <= pos / CHAR_BIT)
return 0;
- val = *ptr++ >> (pos % CHAR_BIT);
+ /* Bits are enumerated in byte in network order: 01234567 */
+ val = *ptr++;
vbits = CHAR_BIT - pos % CHAR_BIT;
- pos = (pos + vbits) / CHAR_BIT;
+ pos = RTE_ALIGN_CEIL(pos, CHAR_BIT) / CHAR_BIT;
vbits = RTE_MIN(vbits, width);
- val &= RTE_BIT32(vbits) - 1;
+ /* Load bytes to cover the field width, checking pattern boundary */
while (vbits < width && pos < item->length) {
uint32_t part = RTE_MIN(width - vbits, (uint32_t)CHAR_BIT);
uint32_t tmp = *ptr++;
- pos++;
- tmp &= RTE_BIT32(part) - 1;
- val |= tmp << vbits;
+ val |= tmp << RTE_ALIGN_CEIL(vbits, CHAR_BIT);
vbits += part;
+ pos++;
}
- return rte_bswap32(val <<= shift);
+ val = rte_cpu_to_be_32(val);
+ val <<= skip;
+ val >>= shift;
+ val &= (RTE_BIT64(width) - 1) << (sizeof(uint32_t) * CHAR_BIT - shift - width);
+ return val;
}
#define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
@@ -211,21 +215,17 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
* Where to search the value and mask.
* @param[in] is_inner
* For inner matching or not.
- * @param[in, def] def
- * Mask generated by mapping shift and width.
*
* @return
* 0 on success, -1 to ignore.
*/
int
mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
- uint32_t idx, uint32_t *pos,
- bool is_inner, uint32_t *def)
+ uint32_t idx, uint32_t *pos, bool is_inner)
{
const struct mlx5_flex_pattern_field *map = tp->map + idx;
uint32_t id = map->reg_id;
- *def = (RTE_BIT64(map->width) - 1) << map->shift;
/* Skip placeholders for DUMMY fields. */
if (id == MLX5_INVALID_SAMPLE_REG_ID) {
*pos += map->width;
@@ -252,8 +252,6 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
* Mlx5 flex item sample mapping handle.
* @param[in] byte_off
* Mlx5 flex item format_select_dw.
- * @param[in] is_mask
- * Spec or mask.
* @param[in] tunnel
* Tunnel mode or not.
* @param[in, def] value
@@ -265,25 +263,23 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
int
mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
- bool is_mask, bool tunnel, uint32_t *value)
+ bool tunnel, uint32_t *value)
{
struct mlx5_flex_pattern_field *map;
struct mlx5_flex_item *tp = flex;
- uint32_t def, i, pos, val;
+ uint32_t i, pos, val;
int id;
*value = 0;
for (i = 0, pos = 0; i < tp->mapnum && pos < item->length * CHAR_BIT; i++) {
map = tp->map + i;
- id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel, &def);
+ id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel);
if (id == -1)
continue;
if (id >= (int)tp->devx_fp->num_samples || id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
return -1;
if (byte_off == tp->devx_fp->sample_info[id].sample_dw_data * sizeof(uint32_t)) {
val = mlx5_flex_get_bitfield(item, pos, map->width, map->shift);
- if (is_mask)
- val &= RTE_BE32(def);
*value |= val;
}
pos += map->width;
@@ -355,10 +351,10 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
spec = item->spec;
mask = item->mask;
tp = (struct mlx5_flex_item *)spec->handle;
- for (i = 0; i < tp->mapnum; i++) {
+ for (i = 0; i < tp->mapnum && pos < (spec->length * CHAR_BIT); i++) {
struct mlx5_flex_pattern_field *map = tp->map + i;
uint32_t val, msk, def;
- int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner, &def);
+ int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner);
if (id == -1)
continue;
@@ -366,11 +362,14 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
if (id >= (int)tp->devx_fp->num_samples ||
id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
return;
+ def = (uint32_t)(RTE_BIT64(map->width) - 1);
+ def <<= (sizeof(uint32_t) * CHAR_BIT - map->shift - map->width);
val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
- msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
+ msk = pos < (mask->length * CHAR_BIT) ?
+ mlx5_flex_get_bitfield(mask, pos, map->width, map->shift) : def;
sample_id = tp->devx_fp->sample_ids[id];
mlx5_flex_set_match_sample(misc4_m, misc4_v,
- def, msk & def, val & msk & def,
+ def, msk, val & msk,
sample_id, id);
pos += map->width;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:10.293928835 +0800
+++ 0111-net-mlx5-fix-non-full-word-sample-fields-in-flex-ite.patch 2024-11-11 14:23:05.342192835 +0800
@@ -1 +1 @@
-From 97e19f0762e5235d6914845a59823d4ea36925bb Mon Sep 17 00:00:00 2001
+From 86088e4684e1c908882cb293d28d3b85226535dc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 97e19f0762e5235d6914845a59823d4ea36925bb ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 2dfcc5eba6..10b986d66b 100644
+index 1b8cb18d63..daee2b6eb7 100644
@@ -29 +31 @@
-@@ -574,7 +574,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
+@@ -436,7 +436,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
@@ -38 +40 @@
-@@ -2825,7 +2825,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
+@@ -2314,7 +2314,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
@@ -48 +50 @@
-index 399923b443..18b4c15a26 100644
+index 61f07f459b..bce1d9e749 100644
@@ -51 +53 @@
-@@ -2602,11 +2602,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
+@@ -2493,11 +2493,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
@@ -66 +68 @@
-index 4451b114ae..5f71573a86 100644
+index af6bf7e411..b447b1598a 100644
@@ -69 +71 @@
-@@ -1526,7 +1526,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1481,7 +1481,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -77 +79 @@
-@@ -1551,7 +1550,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1506,7 +1505,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -86 +88 @@
-@@ -1570,7 +1569,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1525,7 +1524,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -96 +98 @@
-index 0c41b956b0..bf38643a23 100644
+index e7e6358144..c5dd323fa2 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
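The corrected extraction in mlx5_flex_get_bitfield above can be modeled standalone. This sketch (a simplified reimplementation, not the driver code) numbers bits MSB-first within each byte, in network order, as the patch comment describes; it assumes the field lies fully inside the pattern and the width is at most 32 bits.

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Extract "width" bits starting at bit offset "pos" from a pattern
 * whose bits are enumerated in network order (MSB-first per byte). */
uint32_t be_bitfield_get(const uint8_t *pattern, size_t len,
			 uint32_t pos, uint32_t width)
{
	uint32_t first = pos / 8;
	uint32_t nbytes = (pos % 8 + width + 7) / 8;
	uint64_t acc = 0;
	uint32_t i;

	assert(width && width <= 32 && first + nbytes <= len);
	/* Accumulate whole bytes covering the field, big-endian. */
	for (i = 0; i < nbytes; i++)
		acc = (acc << 8) | pattern[first + i];
	/* Drop the bits after the field, then mask off those before it. */
	acc >>= nbytes * 8 - (pos % 8 + width);
	return (uint32_t)(acc & ((UINT64_C(1) << width) - 1));
}
```

For the pattern bytes 0xAB 0xCD, the 8-bit field at bit offset 4 spans the byte boundary and yields 0xBC, the case the original byte-wise little-endian accumulation got wrong.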
* patch 'net/mlx5: fix flex item header length field translation' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (110 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
` (8 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ab563e0a3ea36bdd4139714eafd59129619d88d8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ab563e0a3ea36bdd4139714eafd59129619d88d8 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:23 +0300
Subject: [PATCH] net/mlx5: fix flex item header length field translation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d ]
There are hardware-imposed limitations on the header length
field description for the mask and shift combinations in the
FIELD_MODE_OFFSET mode.
The patch updates:
- the parameter check for the header length field in
FIELD_MODE_OFFSET mode
- the check whether the length field crosses dword boundaries in the header
- the mask extension to the hardware-required width of 6 bits
- the adjustment of the mask left margin offset, preventing a
dword offset
Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_flex.c | 120 ++++++++++++++++--------------
1 file changed, 66 insertions(+), 54 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index c5dd323fa2..58d8c61443 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -449,12 +449,14 @@ mlx5_flex_release_index(struct rte_eth_dev *dev,
*
* shift mask
* ------- ---------------
- * 0 b111100 0x3C
- * 1 b111110 0x3E
- * 2 b111111 0x3F
- * 3 b011111 0x1F
- * 4 b001111 0x0F
- * 5 b000111 0x07
+ * 0 b11111100 0x3C
+ * 1 b01111110 0x3E
+ * 2 b00111111 0x3F
+ * 3 b00011111 0x1F
+ * 4 b00001111 0x0F
+ * 5 b00000111 0x07
+ * 6 b00000011 0x03
+ * 7 b00000001 0x01
*/
static uint8_t
mlx5_flex_hdr_len_mask(uint8_t shift,
@@ -464,8 +466,7 @@ mlx5_flex_hdr_len_mask(uint8_t shift,
int diff = shift - MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
base_mask = mlx5_hca_parse_graph_node_base_hdr_len_mask(attr);
- return diff == 0 ? base_mask :
- diff < 0 ? (base_mask << -diff) & base_mask : base_mask >> diff;
+ return diff < 0 ? base_mask << -diff : base_mask >> diff;
}
static int
@@ -476,7 +477,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
{
const struct rte_flow_item_flex_field *field = &conf->next_header;
struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
- uint32_t len_width, mask;
if (field->field_base % CHAR_BIT)
return rte_flow_error_set
@@ -504,7 +504,14 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
"negative header length field base (FIXED)");
node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
break;
- case FIELD_MODE_OFFSET:
+ case FIELD_MODE_OFFSET: {
+ uint32_t msb, lsb;
+ int32_t shift = field->offset_shift;
+ uint32_t offset = field->offset_base;
+ uint32_t mask = field->offset_mask;
+ uint32_t wmax = attr->header_length_mask_width +
+ MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
+
if (!(attr->header_length_mode &
RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
return rte_flow_error_set
@@ -514,47 +521,73 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"field size is a must for offset mode");
- if (field->field_size + field->offset_base < attr->header_length_mask_width)
+ if ((offset ^ (field->field_size + offset)) >> 5)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "field size plus offset_base is too small");
- node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
- if (field->offset_mask == 0 ||
- !rte_is_power_of_2(field->offset_mask + 1))
+ "field crosses the 32-bit word boundary");
+ /* Hardware counts in dwords, all shifts done by offset within mask */
+ if (shift < 0 || (uint32_t)shift >= wmax)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "header length field shift exceeds limits (OFFSET)");
+ if (!mask)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "zero length field offset mask (OFFSET)");
+ msb = rte_fls_u32(mask) - 1;
+ lsb = rte_bsf32(mask);
+ if (!rte_is_power_of_2((mask >> lsb) + 1))
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "invalid length field offset mask (OFFSET)");
- len_width = rte_fls_u32(field->offset_mask);
- if (len_width > attr->header_length_mask_width)
+ "length field offset mask not contiguous (OFFSET)");
+ if (msb >= field->field_size)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field offset mask too wide (OFFSET)");
- mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
- if (mask < field->offset_mask)
+ "length field offset mask exceeds field size (OFFSET)");
+ if (msb >= wmax)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field shift too big (OFFSET)");
- node->header_length_field_mask = RTE_MIN(mask,
- field->offset_mask);
+ "length field offset mask exceeds supported width (OFFSET)");
+ if (mask & ~mlx5_flex_hdr_len_mask(shift, attr))
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "mask and shift combination not supported (OFFSET)");
+ msb++;
+ offset += field->field_size - msb;
+ if (msb < attr->header_length_mask_width) {
+ if (attr->header_length_mask_width - msb > offset)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "field size plus offset_base is too small");
+ offset += msb;
+ /*
+ * Here we can move to preceding dword. Hardware does
+ * cyclic left shift so we should avoid this and stay
+ * at current dword offset.
+ */
+ offset = (offset & ~0x1Fu) |
+ ((offset - attr->header_length_mask_width) & 0x1F);
+ }
+ node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+ node->header_length_field_mask = mask;
+ node->header_length_field_shift = shift;
+ node->header_length_field_offset = offset;
break;
+ }
case FIELD_MODE_BITMASK:
if (!(attr->header_length_mode &
RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"unsupported header length field mode (BITMASK)");
- if (attr->header_length_mask_width < field->field_size)
+ if (field->offset_shift > 15 || field->offset_shift < 0)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "header length field width exceeds limit");
+ "header length field shift exceeds limit (BITMASK)");
node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
- mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
- if (mask < field->offset_mask)
- return rte_flow_error_set
- (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field shift too big (BITMASK)");
- node->header_length_field_mask = RTE_MIN(mask,
- field->offset_mask);
+ node->header_length_field_mask = field->offset_mask;
+ node->header_length_field_shift = field->offset_shift;
+ node->header_length_field_offset = field->offset_base;
break;
default:
return rte_flow_error_set
@@ -567,27 +600,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"header length field base exceeds limit");
node->header_length_base_value = field->field_base / CHAR_BIT;
- if (field->field_mode == FIELD_MODE_OFFSET ||
- field->field_mode == FIELD_MODE_BITMASK) {
- if (field->offset_shift > 15 || field->offset_shift < 0)
- return rte_flow_error_set
- (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "header length field shift exceeds limit");
- node->header_length_field_shift = field->offset_shift;
- node->header_length_field_offset = field->offset_base;
- }
- if (field->field_mode == FIELD_MODE_OFFSET) {
- if (field->field_size > attr->header_length_mask_width) {
- node->header_length_field_offset +=
- field->field_size - attr->header_length_mask_width;
- } else if (field->field_size < attr->header_length_mask_width) {
- node->header_length_field_offset -=
- attr->header_length_mask_width - field->field_size;
- node->header_length_field_mask =
- RTE_MIN(node->header_length_field_mask,
- (1u << field->field_size) - 1);
- }
- }
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:10.363397633 +0800
+++ 0112-net-mlx5-fix-flex-item-header-length-field-translati.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d Mon Sep 17 00:00:00 2001
+From ab563e0a3ea36bdd4139714eafd59129619d88d8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index bf38643a23..afed16985a 100644
+index c5dd323fa2..58d8c61443 100644
^ permalink raw reply [flat|nested] 128+ messages in thread
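Two of the parameter checks added by the patch above are compact bit tricks worth illustrating in isolation (assumed semantics, standalone helpers, not the driver API): the dword-boundary test and the contiguous-mask test.

```c
#include <assert.h>
#include <stdint.h>

/* Nonzero when the bit range [off, off + size) starts and ends
 * (one-past-end) in different 32-bit words - the
 * "(offset ^ (field_size + offset)) >> 5" test from the patch. */
int crosses_dword(uint32_t off, uint32_t size)
{
	return ((off ^ (off + size)) >> 5) != 0;
}

/* The patch also requires the offset mask to be a contiguous run of
 * set bits: shift the mask down by its lowest set bit, then the
 * result plus one must be a power of two. */
int mask_contiguous(uint32_t mask)
{
	if (mask == 0)
		return 0;
	mask >>= __builtin_ctz(mask);
	return ((mask + 1) & mask) == 0;
}
```

For example, an 8-bit field at bit offset 4 stays inside one dword, while the same field at offset 28 spills into the next; masks like 0xF0 pass the contiguity test while 0x0A (two separated bits) fails it.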
* patch 'build: remove version check on compiler links function' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (111 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
` (7 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Bruce Richardson
Cc: xuemingl, Robin Jarry, Ferruh Yigit, Chengwen Feng, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b2d1e9cfcd735986ffef0ec88d604971a8172708
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b2d1e9cfcd735986ffef0ec88d604971a8172708 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 20 Sep 2024 13:57:34 +0100
Subject: [PATCH] build: remove version check on compiler links function
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2909f9afbfd1b54ace204d40d57b68e6058aca28 ]
The meson documentation for the "compiler.links()" function [1] is a
little unclear: a casual reading implies that the function was new in
the 0.60 meson release. In fact, it was only enhanced as described in
that release, and is present earlier.
As such, we can remove the version checks preceding the calls to the
links function in our code.
[1] https://mesonbuild.com/Reference-manual_returned_compiler.html#compilerlinks
Fixes: fd809737cf8c ("common/qat: fix build with incompatible IPsec library")
Fixes: fb94d8243894 ("crypto/ipsec_mb: add dependency check for cross build")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Robin Jarry <rjarry@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/common/qat/meson.build | 2 +-
drivers/crypto/ipsec_mb/meson.build | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 62abcb6fe3..3d28bd2af5 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -43,7 +43,7 @@ else
IMB_required_ver = '1.4.0'
IMB_header = '#include<intel-ipsec-mb.h>'
libipsecmb = cc.find_library('IPSec_MB', required: false)
- if libipsecmb.found() and meson.version().version_compare('>=0.60') and cc.links(
+ if libipsecmb.found() and cc.links(
'int main(void) {return 0;}', dependencies: libipsecmb)
# version comes with quotes, so we split based on " and take the middle
imb_ver = cc.get_define('IMB_VERSION_STR',
diff --git a/drivers/crypto/ipsec_mb/meson.build b/drivers/crypto/ipsec_mb/meson.build
index 87bf965554..81631d3050 100644
--- a/drivers/crypto/ipsec_mb/meson.build
+++ b/drivers/crypto/ipsec_mb/meson.build
@@ -17,7 +17,7 @@ if not lib.found()
build = false
reason = 'missing dependency, "libIPSec_MB"'
# if the lib is found, check it's the right format
-elif meson.version().version_compare('>=0.60') and not cc.links(
+elif not cc.links(
'int main(void) {return 0;}', dependencies: lib)
build = false
reason = 'incompatible dependency, "libIPSec_MB"'
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.396636933 +0800
+++ 0113-build-remove-version-check-on-compiler-links-functio.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 2909f9afbfd1b54ace204d40d57b68e6058aca28 Mon Sep 17 00:00:00 2001
+From b2d1e9cfcd735986ffef0ec88d604971a8172708 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2909f9afbfd1b54ace204d40d57b68e6058aca28 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 3893b127dd..5a8de16fe0 100644
+index 62abcb6fe3..3d28bd2af5 100644
* patch 'hash: fix thash LFSR initialization' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (112 preceding siblings ...)
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
` (6 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c6474cb613a54d7d2c073080c183ce70c9634ecd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c6474cb613a54d7d2c073080c183ce70c9634ecd Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Fri, 6 Sep 2024 17:01:41 +0000
Subject: [PATCH] hash: fix thash LFSR initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ebf7f1188ea83d6154746e90d535392113ecb1e8 ]
The reverse polynomial for an LFSR was initialized improperly, which
could generate an improper bit sequence in some situations.
This patch implements a proper polynomial reversing function.
Fixes: 28ebff11c2dc ("hash: add predictable RSS")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
lib/hash/rte_thash.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 4ff567ee5a..a952006686 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -160,6 +160,30 @@ thash_get_rand_poly(uint32_t poly_degree)
RTE_DIM(irreducible_poly_table[poly_degree])];
}
+static inline uint32_t
+get_rev_poly(uint32_t poly, int degree)
+{
+ int i;
+ /*
+ * The implicit highest coefficient of the polynomial
+ * becomes the lowest after reversal.
+ */
+ uint32_t rev_poly = 1;
+ uint32_t mask = (1 << degree) - 1;
+
+ /*
+ * Here we assume "poly" argument is an irreducible polynomial,
+ * thus the lowest coefficient of the "poly" must always be equal to "1".
+ * After the reversal, this lowest coefficient becomes the highest and
+ * it is omitted since the highest coefficient is implicitly determined by
+ * degree of the polynomial.
+ */
+ for (i = 1; i < degree; i++)
+ rev_poly |= ((poly >> i) & 0x1) << (degree - i);
+
+ return rev_poly & mask;
+}
+
static struct thash_lfsr *
alloc_lfsr(struct rte_thash_ctx *ctx)
{
@@ -179,7 +203,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
lfsr->state = rte_rand() & ((1 << lfsr->deg) - 1);
} while (lfsr->state == 0);
/* init reverse order polynomial */
- lfsr->rev_poly = (lfsr->poly >> 1) | (1 << (lfsr->deg - 1));
+ lfsr->rev_poly = get_rev_poly(lfsr->poly, lfsr->deg);
/* init proper rev_state*/
lfsr->rev_state = lfsr->state;
for (i = 0; i <= lfsr->deg; i++)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.434207333 +0800
+++ 0114-hash-fix-thash-LFSR-initialization.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From ebf7f1188ea83d6154746e90d535392113ecb1e8 Mon Sep 17 00:00:00 2001
+From c6474cb613a54d7d2c073080c183ce70c9634ecd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ebf7f1188ea83d6154746e90d535392113ecb1e8 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 10721effe6..99a685f0c8 100644
+index 4ff567ee5a..a952006686 100644
@@ -22 +24 @@
-@@ -166,6 +166,30 @@ thash_get_rand_poly(uint32_t poly_degree)
+@@ -160,6 +160,30 @@ thash_get_rand_poly(uint32_t poly_degree)
@@ -53 +55 @@
-@@ -185,7 +209,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
+@@ -179,7 +203,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
* patch 'net/nfp: notify flower firmware about PF speed' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (113 preceding siblings ...)
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
` (5 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Zerun Fu; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ebec3137a58cbd31c0cf8483586527b1d48ae779
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ebec3137a58cbd31c0cf8483586527b1d48ae779 Mon Sep 17 00:00:00 2001
From: Zerun Fu <zerun.fu@corigine.com>
Date: Mon, 14 Oct 2024 10:43:55 +0800
Subject: [PATCH] net/nfp: notify flower firmware about PF speed
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2254813795099aa6c05caed5e8c0dcc7a8f03b4e ]
When using flower firmware, the VF speed is obtained from the
firmware, and the firmware gets the VF speed from the PF.
But the previous logic did not notify the firmware about the PF speed,
which caused the VF speed to be unavailable.
Fix this by adding logic to notify the firmware about the PF speed.
Fixes: e1124c4f8a45 ("net/nfp: add flower representor framework")
Signed-off-by: Zerun Fu <zerun.fu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_representor.c | 3 +++
drivers/net/nfp/nfp_net_common.c | 2 +-
drivers/net/nfp/nfp_net_common.h | 2 ++
3 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 23709acbba..ada28d07c6 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -33,6 +33,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
pf_hw = repr->app_fw_flower->pf_hw;
ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
+ if (repr->repr_type == NFP_REPR_TYPE_PF)
+ nfp_net_notify_port_speed(repr->app_fw_flower->pf_hw, link);
+
return ret;
}
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 134a9b807e..bf44373b26 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -166,7 +166,7 @@ nfp_net_link_speed_rte2nfp(uint32_t speed)
return NFP_NET_CFG_STS_LINK_RATE_UNKNOWN;
}
-static void
+void
nfp_net_notify_port_speed(struct nfp_net_hw *hw,
struct rte_eth_link *link)
{
diff --git a/drivers/net/nfp/nfp_net_common.h b/drivers/net/nfp/nfp_net_common.h
index 41d59bfa99..72286ab5c9 100644
--- a/drivers/net/nfp/nfp_net_common.h
+++ b/drivers/net/nfp/nfp_net_common.h
@@ -282,6 +282,8 @@ int nfp_net_flow_ctrl_set(struct rte_eth_dev *dev,
void nfp_pf_uninit(struct nfp_pf_dev *pf_dev);
uint32_t nfp_net_get_port_num(struct nfp_pf_dev *pf_dev,
struct nfp_eth_table *nfp_eth_table);
+void nfp_net_notify_port_speed(struct nfp_net_hw *hw,
+ struct rte_eth_link *link);
#define NFP_PRIV_TO_APP_FW_NIC(app_fw_priv)\
((struct nfp_app_fw_nic *)app_fw_priv)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.471975632 +0800
+++ 0115-net-nfp-notify-flower-firmware-about-PF-speed.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 2254813795099aa6c05caed5e8c0dcc7a8f03b4e Mon Sep 17 00:00:00 2001
+From ebec3137a58cbd31c0cf8483586527b1d48ae779 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2254813795099aa6c05caed5e8c0dcc7a8f03b4e ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index eb0a02874b..eae6ba39e1 100644
+index 23709acbba..ada28d07c6 100644
@@ -31,3 +33,3 @@
-@@ -37,6 +37,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
-
- ret = nfp_net_link_update_common(dev, link, link->link_status);
+@@ -33,6 +33,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
+ pf_hw = repr->app_fw_flower->pf_hw;
+ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
@@ -42 +44 @@
-index b986ed4622..f76d5a6895 100644
+index 134a9b807e..bf44373b26 100644
@@ -45,2 +47,2 @@
-@@ -184,7 +184,7 @@ nfp_net_link_speed_nfp2rte_check(uint32_t speed)
- return RTE_ETH_SPEED_NUM_NONE;
+@@ -166,7 +166,7 @@ nfp_net_link_speed_rte2nfp(uint32_t speed)
+ return NFP_NET_CFG_STS_LINK_RATE_UNKNOWN;
@@ -55 +57 @@
-index 8429db68f0..d4fe8338b9 100644
+index 41d59bfa99..72286ab5c9 100644
@@ -58,4 +60,4 @@
-@@ -383,6 +383,8 @@ int nfp_net_vf_config_app_init(struct nfp_net_hw *net_hw,
- bool nfp_net_version_check(struct nfp_hw *hw,
- struct nfp_pf_dev *pf_dev);
- void nfp_net_ctrl_bar_size_set(struct nfp_pf_dev *pf_dev);
+@@ -282,6 +282,8 @@ int nfp_net_flow_ctrl_set(struct rte_eth_dev *dev,
+ void nfp_pf_uninit(struct nfp_pf_dev *pf_dev);
+ uint32_t nfp_net_get_port_num(struct nfp_pf_dev *pf_dev,
+ struct nfp_eth_table *nfp_eth_table);
* patch 'net/nfp: do not set IPv6 flag in transport mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (114 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
` (4 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Shihong Wang; +Cc: xuemingl, Long Wu, Peng Zhang, Chaoyong He, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0b75611d8ab73a141d4141de7ba56b927eb05414
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0b75611d8ab73a141d4141de7ba56b927eb05414 Mon Sep 17 00:00:00 2001
From: Shihong Wang <shihong.wang@corigine.com>
Date: Mon, 14 Oct 2024 10:43:56 +0800
Subject: [PATCH] net/nfp: do not set IPv6 flag in transport mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c ]
Transport mode only encapsulates the security protocol header; it
does not pay attention to the IP protocol type, so there is no need
to set the IPv6 flag.
Fixes: 3d21da66c06b ("net/nfp: create security session")
Signed-off-by: Shihong Wang <shihong.wang@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
drivers/net/nfp/nfp_ipsec.c | 15 ++-------------
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c
index b10cda570b..56f3777226 100644
--- a/drivers/net/nfp/nfp_ipsec.c
+++ b/drivers/net/nfp/nfp_ipsec.c
@@ -1053,20 +1053,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
break;
case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
- type = conf->ipsec.tunnel.type;
cfg->ctrl_word.mode = NFP_IPSEC_MODE_TRANSPORT;
- if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
- memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
- cfg->ipv6 = 0;
- } else if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
- memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
- memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
- cfg->ipv6 = 1;
- } else {
- PMD_DRV_LOG(ERR, "Unsupported address family!");
- return -EINVAL;
- }
+ memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
+ memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
break;
default:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.506232932 +0800
+++ 0116-net-nfp-do-not-set-IPv6-flag-in-transport-mode.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c Mon Sep 17 00:00:00 2001
+From 0b75611d8ab73a141d4141de7ba56b927eb05414 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 647bc2bb6d..89116af1b2 100644
+index b10cda570b..56f3777226 100644
@@ -25 +27 @@
-@@ -1056,20 +1056,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
+@@ -1053,20 +1053,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
* patch 'dmadev: fix potential null pointer access' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (115 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
` (3 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=43802f5b3729302977db17e78dbeac7ac3a09353
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 43802f5b3729302977db17e78dbeac7ac3a09353 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:17:34 +0800
Subject: [PATCH] dmadev: fix potential null pointer access
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e5389d427ec43ab805d0a1caed89b63656fd7fde ]
When rte_dma_vchan_status(dev_id, vchan, NULL) is called, a null pointer
access is triggered.
This patch adds a null pointer check.
Fixes: 5e0f85912754 ("dmadev: add channel status check for testing use")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/dmadev/rte_dmadev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 5093c6e38b..a2e52cc8ff 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -731,7 +731,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
{
struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
- if (!rte_dma_is_valid(dev_id))
+ if (!rte_dma_is_valid(dev_id) || status == NULL)
return -EINVAL;
if (vchan >= dev->data->dev_conf.nb_vchans) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.548093632 +0800
+++ 0117-dmadev-fix-potential-null-pointer-access.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From e5389d427ec43ab805d0a1caed89b63656fd7fde Mon Sep 17 00:00:00 2001
+From 43802f5b3729302977db17e78dbeac7ac3a09353 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e5389d427ec43ab805d0a1caed89b63656fd7fde ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 845727210f..60c3d8ebf6 100644
+index 5093c6e38b..a2e52cc8ff 100644
@@ -22 +24 @@
-@@ -741,7 +741,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
+@@ -731,7 +731,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
* patch 'net/gve/base: fix build with Fedora Rawhide' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (116 preceding siblings ...)
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
` (2 subsequent siblings)
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington; +Cc: xuemingl, David Marchand, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 17 Oct 2024 16:42:33 -0700
Subject: [PATCH] net/gve/base: fix build with Fedora Rawhide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f0d9e787747dda0715654da9f0501f54fe105868 ]
Currently, a number of integer types are typedef'd to their corresponding
userspace or RTE values. This can be problematic if these types are
already defined somewhere else, as it would cause type collisions.
This patch changes the typedefs to #define macros which are only defined
if the types are not defined already.
Note: this was reported by OBS CI on 2024/10/17, when compiling DPDK
in Fedora Rawhide.
Fixes: c9ba2caf6302 ("net/gve/base: add OS-specific implementation")
Fixes: abf1242fbb84 ("net/gve: add struct members and typedefs for DQO")
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
---
drivers/net/gve/base/gve_osdep.h | 48 ++++++++++++++++++++++++--------
1 file changed, 36 insertions(+), 12 deletions(-)
diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h
index a3702f4b8c..a6eb52306f 100644
--- a/drivers/net/gve/base/gve_osdep.h
+++ b/drivers/net/gve/base/gve_osdep.h
@@ -29,22 +29,46 @@
#include <sys/utsname.h>
#endif
-typedef uint8_t u8;
-typedef uint16_t u16;
-typedef uint32_t u32;
-typedef uint64_t u64;
+#ifndef u8
+#define u8 uint8_t
+#endif
+#ifndef u16
+#define u16 uint16_t
+#endif
+#ifndef u32
+#define u32 uint32_t
+#endif
+#ifndef u64
+#define u64 uint64_t
+#endif
-typedef rte_be16_t __sum16;
+#ifndef __sum16
+#define __sum16 rte_be16_t
+#endif
-typedef rte_be16_t __be16;
-typedef rte_be32_t __be32;
-typedef rte_be64_t __be64;
+#ifndef __be16
+#define __be16 rte_be16_t
+#endif
+#ifndef __be32
+#define __be32 rte_be32_t
+#endif
+#ifndef __be64
+#define __be64 rte_be64_t
+#endif
-typedef rte_le16_t __le16;
-typedef rte_le32_t __le32;
-typedef rte_le64_t __le64;
+#ifndef __le16
+#define __le16 rte_le16_t
+#endif
+#ifndef __le32
+#define __le32 rte_le32_t
+#endif
+#ifndef __le64
+#define __le64 rte_le64_t
+#endif
-typedef rte_iova_t dma_addr_t;
+#ifndef dma_addr_t
+#define dma_addr_t rte_iova_t
+#endif
#define ETH_MIN_MTU RTE_ETHER_MIN_MTU
#define ETH_ALEN RTE_ETHER_ADDR_LEN
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.581961132 +0800
+++ 0118-net-gve-base-fix-build-with-Fedora-Rawhide.patch 2024-11-11 14:23:05.362192835 +0800
@@ -1 +1 @@
-From f0d9e787747dda0715654da9f0501f54fe105868 Mon Sep 17 00:00:00 2001
+From 6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f0d9e787747dda0715654da9f0501f54fe105868 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index c0ee0d567c..64181cebd6 100644
+index a3702f4b8c..a6eb52306f 100644
* patch 'power: fix mapped lcore ID' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (117 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Sivaprasad Tummala; +Cc: xuemingl, Konstantin Ananyev, Huisong Li, dpdk stable
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ff3018e2ffc9660ddd9770657dbf491442a34c7a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ff3018e2ffc9660ddd9770657dbf491442a34c7a Mon Sep 17 00:00:00 2001
From: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Date: Fri, 18 Oct 2024 03:34:34 +0000
Subject: [PATCH] power: fix mapped lcore ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5c9b07eeba55d527025f1f4945e2dbb366f21215 ]
This commit fixes an issue in the power library
related to using lcores mapped to different
physical cores (--lcores option in EAL).
Previously, the power library incorrectly accessed
CPU sysfs attributes for power management, treating
lcore IDs as CPU IDs.
For example, with --lcores '1@128', lcore_id '1' was interpreted
as the CPU id instead of '128'.
This patch corrects the cpu_id based on lcore and CPU
mappings. It also constrains power management support
for lcores mapped to multiple physical cores/threads.
When multiple lcores are mapped to the same physical core,
invoking frequency scaling APIs on any lcore will apply the
changes effectively.
Fixes: 53e54bf81700 ("eal: new option --lcores for cpu assignment")
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
app/test/test_power_cpufreq.c | 21 ++++++++++++++++++---
lib/power/power_acpi_cpufreq.c | 6 +++++-
lib/power/power_amd_pstate_cpufreq.c | 6 +++++-
lib/power/power_common.c | 22 ++++++++++++++++++++++
lib/power/power_common.h | 1 +
lib/power/power_cppc_cpufreq.c | 6 +++++-
lib/power/power_pstate_cpufreq.c | 6 +++++-
7 files changed, 61 insertions(+), 7 deletions(-)
diff --git a/app/test/test_power_cpufreq.c b/app/test/test_power_cpufreq.c
index 619b2811c6..edbd34424e 100644
--- a/app/test/test_power_cpufreq.c
+++ b/app/test/test_power_cpufreq.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <inttypes.h>
#include <rte_cycles.h>
+#include <rte_lcore.h>
#include "test.h"
@@ -46,9 +47,10 @@ test_power_caps(void)
static uint32_t total_freq_num;
static uint32_t freqs[TEST_POWER_FREQS_NUM_MAX];
+static uint32_t cpu_id;
static int
-check_cur_freq(unsigned int lcore_id, uint32_t idx, bool turbo)
+check_cur_freq(__rte_unused unsigned int lcore_id, uint32_t idx, bool turbo)
{
#define TEST_POWER_CONVERT_TO_DECIMAL 10
#define MAX_LOOP 100
@@ -62,13 +64,13 @@ check_cur_freq(unsigned int lcore_id, uint32_t idx, bool turbo)
int i;
if (snprintf(fullpath, sizeof(fullpath),
- TEST_POWER_SYSFILE_CPUINFO_FREQ, lcore_id) < 0) {
+ TEST_POWER_SYSFILE_CPUINFO_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
if (f == NULL) {
if (snprintf(fullpath, sizeof(fullpath),
- TEST_POWER_SYSFILE_SCALING_FREQ, lcore_id) < 0) {
+ TEST_POWER_SYSFILE_SCALING_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
@@ -497,6 +499,19 @@ test_power_cpufreq(void)
{
int ret = -1;
enum power_management_env env;
+ rte_cpuset_t lcore_cpus;
+
+ lcore_cpus = rte_lcore_cpuset(TEST_POWER_LCORE_ID);
+ if (CPU_COUNT(&lcore_cpus) != 1) {
+ printf("Power management doesn't support lcore %u mapping to %u CPUs\n",
+ TEST_POWER_LCORE_ID,
+ CPU_COUNT(&lcore_cpus));
+ return TEST_SKIPPED;
+ }
+ for (cpu_id = 0; cpu_id < CPU_SETSIZE; cpu_id++) {
+ if (CPU_ISSET(cpu_id, &lcore_cpus))
+ break;
+ }
/* Test initialisation of a valid lcore */
ret = rte_power_init(TEST_POWER_LCORE_ID);
diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c
index 8b55f19247..d860a12a8c 100644
--- a/lib/power/power_acpi_cpufreq.c
+++ b/lib/power/power_acpi_cpufreq.c
@@ -258,7 +258,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c
index dbd9d2b3ee..7b8e77003f 100644
--- a/lib/power/power_amd_pstate_cpufreq.c
+++ b/lib/power/power_amd_pstate_cpufreq.c
@@ -376,7 +376,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_common.c b/lib/power/power_common.c
index 1e09facb86..8ffb49ef8f 100644
--- a/lib/power/power_common.c
+++ b/lib/power/power_common.c
@@ -9,6 +9,7 @@
#include <rte_log.h>
#include <rte_string_fns.h>
+#include <rte_lcore.h>
#include "power_common.h"
@@ -202,3 +203,24 @@ out:
return ret;
}
+
+int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id)
+{
+ rte_cpuset_t lcore_cpus;
+ uint32_t cpu;
+
+ lcore_cpus = rte_lcore_cpuset(lcore_id);
+ if (CPU_COUNT(&lcore_cpus) != 1) {
+ RTE_LOG(ERR, POWER, "Power library does not support lcore %u mapping to %u CPUs", lcore_id,
+ CPU_COUNT(&lcore_cpus));
+ return -1;
+ }
+
+ for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+ if (CPU_ISSET(cpu, &lcore_cpus))
+ break;
+ }
+ *cpu_id = cpu;
+
+ return 0;
+}
diff --git a/lib/power/power_common.h b/lib/power/power_common.h
index c1c7139276..b928df941f 100644
--- a/lib/power/power_common.h
+++ b/lib/power/power_common.h
@@ -27,5 +27,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
int read_core_sysfs_u32(FILE *f, uint32_t *val);
int read_core_sysfs_s(FILE *f, char *buf, unsigned int len);
int write_core_sysfs_s(FILE *f, const char *str);
+int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id);
#endif /* _POWER_COMMON_H_ */
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index f2ba684c83..add477c804 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -362,7 +362,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c
index 5ca5f60bcd..890875bd93 100644
--- a/lib/power/power_pstate_cpufreq.c
+++ b/lib/power/power_pstate_cpufreq.c
@@ -564,7 +564,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_performance(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.615056131 +0800
+++ 0119-power-fix-mapped-lcore-ID.patch 2024-11-11 14:23:05.362192835 +0800
@@ -1 +1 @@
-From 5c9b07eeba55d527025f1f4945e2dbb366f21215 Mon Sep 17 00:00:00 2001
+From ff3018e2ffc9660ddd9770657dbf491442a34c7a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5c9b07eeba55d527025f1f4945e2dbb366f21215 ]
@@ -25 +27,0 @@
-Cc: stable@dpdk.org
@@ -34 +36 @@
- lib/power/power_common.c | 23 +++++++++++++++++++++++
+ lib/power/power_common.c | 22 ++++++++++++++++++++++
@@ -38 +40 @@
- 7 files changed, 62 insertions(+), 7 deletions(-)
+ 7 files changed, 61 insertions(+), 7 deletions(-)
@@ -101 +103 @@
-index abad53bef1..ae809fbb60 100644
+index 8b55f19247..d860a12a8c 100644
@@ -104 +106 @@
-@@ -264,7 +264,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
+@@ -258,7 +258,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
@@ -110 +112 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
@@ -116 +118 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -118 +120 @@
-index 4809d45a22..2b728eca18 100644
+index dbd9d2b3ee..7b8e77003f 100644
@@ -121 +123 @@
-@@ -382,7 +382,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
+@@ -376,7 +376,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
@@ -127 +129 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
@@ -133 +135 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -135 +137 @@
-index 590986d5ef..b47c63a5f1 100644
+index 1e09facb86..8ffb49ef8f 100644
@@ -146 +148 @@
-@@ -204,3 +205,25 @@ out:
+@@ -202,3 +203,24 @@ out:
@@ -158,3 +160,2 @@
-+ POWER_LOG(ERR,
-+ "Power library does not support lcore %u mapping to %u CPUs",
-+ lcore_id, CPU_COUNT(&lcore_cpus));
++ RTE_LOG(ERR, POWER, "Power library does not support lcore %u mapping to %u CPUs", lcore_id,
++ CPU_COUNT(&lcore_cpus));
@@ -173 +174 @@
-index 83f742f42a..82fb94d0c0 100644
+index c1c7139276..b928df941f 100644
@@ -176 +177 @@
-@@ -31,5 +31,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
+@@ -27,5 +27,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
@@ -184 +185 @@
-index e73f4520d0..cc9305bdfe 100644
+index f2ba684c83..add477c804 100644
@@ -187 +188 @@
-@@ -368,7 +368,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
+@@ -362,7 +362,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
@@ -193 +194 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
@@ -199 +200 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -201 +202 @@
-index 1c2a91a178..4755909466 100644
+index 5ca5f60bcd..890875bd93 100644
@@ -204 +205 @@
-@@ -570,7 +570,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
+@@ -564,7 +564,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
@@ -210 +211 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
@@ -216 +217 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
^ permalink raw reply [flat|nested] 128+ messages in thread
* patch 'net/ionic: fix build with Fedora Rawhide' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (118 preceding siblings ...)
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Timothy Redaelli; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e837f2e7286f9cae9004abae1b6b6f1c99a1b876
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e837f2e7286f9cae9004abae1b6b6f1c99a1b876 Mon Sep 17 00:00:00 2001
From: Timothy Redaelli <tredaelli@redhat.com>
Date: Thu, 24 Oct 2024 11:30:06 +0200
Subject: [PATCH] net/ionic: fix build with Fedora Rawhide
Cc: Xueming Li <xuemingl@nvidia.com>
Currently, a number of integer types are typedef'd to their corresponding
userspace or RTE values. This can be problematic if these types are
already defined somewhere else, as it would cause type collisions.
This patch changes the typedefs to #define macros which are only defined
if the types are not defined already.
Fixes: 5ef518098ec6 ("net/ionic: register and initialize adapter")
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
---
drivers/net/ionic/ionic_osdep.h | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ionic/ionic_osdep.h b/drivers/net/ionic/ionic_osdep.h
index 68f767b920..97188dfd59 100644
--- a/drivers/net/ionic/ionic_osdep.h
+++ b/drivers/net/ionic/ionic_osdep.h
@@ -30,14 +30,28 @@
#define __iomem
-typedef uint8_t u8;
-typedef uint16_t u16;
-typedef uint32_t u32;
-typedef uint64_t u64;
-
-typedef uint16_t __le16;
-typedef uint32_t __le32;
-typedef uint64_t __le64;
+#ifndef u8
+#define u8 uint8_t
+#endif
+#ifndef u16
+#define u16 uint16_t
+#endif
+#ifndef u32
+#define u32 uint32_t
+#endif
+#ifndef u64
+#define u64 uint64_t
+#endif
+
+#ifndef __le16
+#define __le16 rte_le16_t
+#endif
+#ifndef __le32
+#define __le32 rte_le32_t
+#endif
+#ifndef __le64
+#define __le64 rte_le64_t
+#endif
#define ioread8(reg) rte_read8(reg)
#define ioread32(reg) rte_read32(rte_le_to_cpu_32(reg))
--
2.34.1
* patch '' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (119 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
120 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
@ 2024-11-12 9:02 ` David Marchand
2024-11-12 11:35 ` Xueming Li
0 siblings, 1 reply; 128+ messages in thread
From: David Marchand @ 2024-11-12 9:02 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Hello Xueming,
On Mon, Nov 11, 2024 at 7:30 AM Xueming Li <xuemingl@nvidia.com> wrote:
>
> Hi,
>
> FYI, your patch has been queued to stable release 23.11.3
>
> Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> It will be pushed if I get no objections before 11/30/24. So please
> shout if anyone has objections.
>
> Also note that after the patch there's a diff of the upstream commit vs the
> patch applied to the branch. This will indicate if there was any rebasing
> needed to apply to the stable branch. If there were code changes for rebasing
> (ie: not only metadata diffs), please double check that the rebase was
> correctly done.
>
> Queued patches are on a temporary branch at:
> https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
>
> This queued commit can be viewed at:
> https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
>
> Thanks.
>
> Xueming Li <xuemingl@nvidia.com>
>
> ---
> From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
> From: David Marchand <david.marchand@redhat.com>
> Date: Fri, 17 Nov 2023 14:18:23 +0100
> Subject: [PATCH] log: add a per line log helper
> Cc: Xueming Li <xuemingl@nvidia.com>
>
> [upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
>
> gcc builtin __builtin_strchr can be used as a static assertion to check
> whether passed format strings contain a \n.
> This can be useful to detect double \n in log messages.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Why do we want this change backported?
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 9:02 ` David Marchand
@ 2024-11-12 11:35 ` Xueming Li
2024-11-12 12:47 ` David Marchand
0 siblings, 1 reply; 128+ messages in thread
From: Xueming Li @ 2024-11-12 11:35 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Hi David,
________________________________
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, November 12, 2024 5:02 PM
> To: Xueming Li <xuemingl@nvidia.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
>
> Hello Xueming,
>
> On Mon, Nov 11, 2024 at 7:30 AM Xueming Li <xuemingl@nvidia.com> wrote:
> >
> > Hi,
> >
> > FYI, your patch has been queued to stable release 23.11.3
> >
> > Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> > It will be pushed if I get no objections before 11/30/24. So please
> > shout if anyone has objections.
> >
> > Also note that after the patch there's a diff of the upstream commit vs the
> > patch applied to the branch. This will indicate if there was any rebasing
> > needed to apply to the stable branch. If there were code changes for rebasing
> > (ie: not only metadata diffs), please double check that the rebase was
> > correctly done.
> >
> > Queued patches are on a temporary branch at:
> > https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
> >
> > This queued commit can be viewed at:
> > https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
> >
> > Thanks.
> >
> > Xueming Li <xuemingl@nvidia.com>
> >
> > ---
> > From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
> > From: David Marchand <david.marchand@redhat.com>
> > Date: Fri, 17 Nov 2023 14:18:23 +0100
> > Subject: [PATCH] log: add a per line log helper
> > Cc: Xueming Li <xuemingl@nvidia.com>
> >
> > [upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
> >
> > gcc builtin __builtin_strchr can be used as a static assertion to check
> > whether passed format strings contain a \n.
> > This can be useful to detect double \n in log messages.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Stephen Hemminger <stephen@networkplumber.org>
> > Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>
> Why do we want this change backported?
It's a dependency of the patch below that we want to backport, any suggestion?
- f665790a5d drivers: remove redundant newline from logs
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 11:35 ` Xueming Li
@ 2024-11-12 12:47 ` David Marchand
2024-11-12 13:56 ` Xueming Li
0 siblings, 1 reply; 128+ messages in thread
From: David Marchand @ 2024-11-12 12:47 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
On Tue, Nov 12, 2024 at 12:35 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > Why do we want this change backported?
>
> It's a dependency of the patch below that we want to backport, any suggestion?
> - f665790a5d drivers: remove redundant newline from logs
In theory, there should be no such dependency.
This f665790a5d change is supposed to remove only extra \n and nothing more.
Could you give more detail?
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 12:47 ` David Marchand
@ 2024-11-12 13:56 ` Xueming Li
2024-11-12 14:09 ` David Marchand
0 siblings, 1 reply; 128+ messages in thread
From: Xueming Li @ 2024-11-12 13:56 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
RTE_LOG_DP_LINE is called.
________________________________
From: David Marchand <david.marchand@redhat.com>
Sent: Tuesday, November 12, 2024 8:47 PM
To: Xueming Li <xuemingl@nvidia.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
On Tue, Nov 12, 2024 at 12:35 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > Why do we want this change backported?
>
> It's a dependency of the patch below that we want to backport, any suggestion?
> - f665790a5d drivers: remove redundant newline from logs
In theory, there should be no such dependency.
This f665790a5d change is supposed to remove only extra \n and nothing more.
Could you give more detail?
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 13:56 ` Xueming Li
@ 2024-11-12 14:09 ` David Marchand
2024-11-12 14:11 ` Xueming Li
0 siblings, 1 reply; 128+ messages in thread
From: David Marchand @ 2024-11-12 14:09 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
On Tue, Nov 12, 2024 at 2:57 PM Xueming Li <xuemingl@nvidia.com> wrote:
>
> RTE_LOG_DP_LINE is called.
Oh indeed, that's an error in the commit f665790a5d.
The hunk on drivers/crypto/dpaa_sec/dpaa_sec_log.h should be dropped.
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 14:09 ` David Marchand
@ 2024-11-12 14:11 ` Xueming Li
0 siblings, 0 replies; 128+ messages in thread
From: Xueming Li @ 2024-11-12 14:11 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Thanks for the update, I'll take it out and then remove this patch.
________________________________
From: David Marchand <david.marchand@redhat.com>
Sent: Tuesday, November 12, 2024 10:09 PM
To: Xueming Li <xuemingl@nvidia.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
On Tue, Nov 12, 2024 at 2:57 PM Xueming Li <xuemingl@nvidia.com> wrote:
>
> RTE_LOG_DP_LINE is called.
Oh indeed, that's an error in the commit f665790a5d.
The hunk on drivers/crypto/dpaa_sec/dpaa_sec_log.h should be dropped.
--
David Marchand
end of thread, other threads:[~2024-11-12 14:12 UTC | newest]
Thread overview: 128+ messages
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
2024-11-11 6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
2024-11-12 9:02 ` David Marchand
2024-11-12 11:35 ` Xueming Li
2024-11-12 12:47 ` David Marchand
2024-11-12 13:56 ` Xueming Li
2024-11-12 14:09 ` David Marchand
2024-11-12 14:11 ` Xueming Li
2024-11-11 6:26 ` patch 'drivers: remove redundant newline from logs' " Xueming Li
2024-11-11 6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
2024-11-11 6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: fix use after free in mailbox init' " Xueming Li
2024-11-11 6:27 ` patch 'crypto/bcmfs: fix free function mismatch' " Xueming Li
2024-11-11 6:27 ` patch 'dma/idxd: fix free function mismatch in device probe' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix free function mismatch in port config' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix use after free in mempool create' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix invalid free in JSON parser' " Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
2024-11-11 6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix DDP loading with reserved track ID' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix repeated register dumps' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix unchecked return value' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix loop bounds' " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: delay VF reset command' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice: " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: " Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
2024-11-11 6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' " Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix typecasting channel ID' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix the fman details status' " Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix reallocate mbuf handling' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: always attempt Rx refill on DQ' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix type declaration of some variables' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix representor port link status update' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix refill logic causing memory corruption' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: add IO memory barriers before reading descriptors' " Xueming Li
2024-11-11 6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
2024-11-11 6:28 ` patch 'net/iavf: preserve MAC address with i40e PF Linux driver' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: workaround list management of Rx queue control' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5/hws: fix flex item as tunnel header' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: add flex item query for tunnel mode' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li