* patch has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Xueming Li; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note that it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.
Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If the rebase required code changes
(i.e. not only metadata diffs), please double check that it was done
correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0278b12d3741c9375edeced034a111c1e551bfba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0278b12d3741c9375edeced034a111c1e551bfba Mon Sep 17 00:00:00 2001
From: Xueming Li <xuemingl@nvidia.com>
Date: Mon, 11 Nov 2024 14:23:04 +0800
Subject: [PATCH] 23.11.3-rc1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
Aleksandr Loktionov (1):
net/i40e/base: fix misleading debug logs and comments
Anatoly Burakov (2):
net/i40e/base: fix setting flags in init function
net/i40e/base: add missing X710TL device check
Artur Tyminski (1):
net/i40e/base: fix DDP loading with reserved track ID
Barbara Skobiej (3):
net/ixgbe/base: fix unchecked return value
net/i40e/base: fix unchecked return value
net/i40e/base: fix loop bounds
Bill Xiang (2):
vhost: fix offset while mapping log base address
vdpa: update used flags in used ring relay
Bing Zhao (1):
net/mlx5: workaround list management of Rx queue control
Bruce Richardson (7):
eal/x86: fix 32-bit write combining store
net/iavf: delay VF reset command
net/i40e: fix AVX-512 pointer copy on 32-bit
net/ice: fix AVX-512 pointer copy on 32-bit
net/iavf: fix AVX-512 pointer copy on 32-bit
common/idpf: fix AVX-512 pointer copy on 32-bit
build: remove version check on compiler links function
Chaoyong He (2):
net/nfp: fix link change return value
net/nfp: fix pause frame setting check
Chengwen Feng (4):
examples/eventdev: fix queue crash with generic pipeline
ethdev: verify queue ID in Tx done cleanup
net/hns3: verify reset type from firmware
dmadev: fix potential null pointer access
Dave Ertman (1):
net/ice/base: fix VLAN replay after reset
David Marchand (3):
log: add a per line log helper
drivers: remove redundant newline from logs
net/iavf: preserve MAC address with i40e PF Linux driver
Eryk Rybak (1):
net/i40e/base: fix blinking X722 with X557 PHY
Fabio Pricoco (2):
net/ice/base: fix iteration of TLVs in Preserved Fields Area
net/ice/base: add bounds check
Gagandeep Singh (2):
crypto/dpaa2_sec: fix memory leak
bus/dpaa: fix PFDRs leaks due to FQRNIs
Hemant Agrawal (2):
bus/dpaa: fix VSP for 1G fm1-mac9 and 10
bus/dpaa: fix the fman details status
Hernan Vargas (2):
baseband/acc: fix access to deallocated mem
baseband/acc: fix soft output bypass RM
Jie Hai (2):
net/hns3: remove some basic address dump
net/hns3: fix dump counter of registers
Joshua Washington (5):
net/gve: fix mbuf allocation memory leak for DQ Rx
net/gve: always attempt Rx refill on DQ
net/gve: fix refill logic causing memory corruption
net/gve: add IO memory barriers before reading descriptors
net/gve/base: fix build with Fedora Rawhide
Julien Hascoet (1):
crypto/scheduler: fix session size computation
Jun Wang (1):
net/e1000: fix link status crash in secondary process
Kaiwen Deng (1):
net/iavf: fix crash when link is unstable
Kommula Shiva Shankar (1):
net/virtio-user: reset used index counter
Malcolm Bumgardner (1):
dev: fix callback lookup when unregistering device
Maxime Coquelin (1):
vhost: restrict set max queue pair API to VDUSE
Mihai Brodschi (1):
net/memif: fix buffer overflow in zero copy Rx
Mingjin Ye (1):
bus/vdev: revert fix devargs in secondary process
Niall Meade (1):
ethdev: fix overflow in descriptor count
Nithin Dabilpuram (2):
common/cnxk: fix inline CTX write
common/cnxk: fix CPT HW word size for outbound SA
Oleksandr Nahnybida (1):
pcapng: fix handling of chained mbufs
Paul Greenwalt (1):
net/ice/base: fix link speed for 200G
Pavan Nikhilesh (3):
test/event: fix schedule type
test/event: fix target event queue
common/cnxk: fix IRQ reconfiguration
Praveen Shetty (2):
net/cpfl: add checks for flow action types
net/cpfl: fix parsing protocol ID mask field
Qin Ke (2):
net/nfp: fix type declaration of some variables
net/nfp: fix representor port link status update
Radoslaw Tyl (1):
net/i40e/base: fix repeated register dumps
Rakesh Kudurumalla (6):
net/cnxk: fix Rx timestamp handling for VF
net/cnxk: fix Rx offloads to handle timestamp
event/cnxk: fix Rx timestamp handling
net/cnxk: fix OOP handling for inbound packets
event/cnxk: fix OOP handling in event mode
common/cnxk: fix base log level
Rohit Raj (1):
net/dpaa: fix typecasting channel ID
Shihong Wang (1):
net/nfp: do not set IPv6 flag in transport mode
Shreesh Adiga (1):
net/mana: support rdma-core via pkg-config
Sivaprasad Tummala (1):
power: fix mapped lcore ID
Srikanth Yalavarthi (1):
ml/cnxk: fix handling of TVM model I/O
Stephen Hemminger (22):
bpf: fix free function mismatch if convert fails
baseband/la12xx: fix use after free in modem config
common/qat: fix use after free in device probe
common/idpf: fix use after free in mailbox init
crypto/bcmfs: fix free function mismatch
dma/idxd: fix free function mismatch in device probe
event/cnxk: fix free function mismatch in port config
net/cnxk: fix use after free in mempool create
net/cpfl: fix invalid free in JSON parser
net/e1000: fix use after free in filter flush
net/nfp: fix double free in flow destroy
net/sfc: fix use after free in debug logs
raw/ifpga/base: fix use after free
raw/ifpga: fix free function mismatch in interrupt config
examples/vhost: fix free function mismatch
app/dumpcap: fix handling of jumbo frames
net/tap: avoid memcpy with null argument
app/testpmd: remove unnecessary cast
net/pcap: set live interface as non-blocking
net/ena: revert redefining memcpy
net/tap: restrict maximum number of MP FDs
net/pcap: fix blocking Rx
Sunil Kumar Kori (1):
common/cnxk: fix MAC address change with active VF
Tathagat Priyadarshi (2):
net/gve: fix queue setup and stop
net/gve: fix Tx for chained mbuf
Tejasree Kondoj (1):
examples/ipsec-secgw: fix dequeue count from cryptodev
Thomas Monjalon (1):
net/nfb: fix use after free
Timothy Redaelli (1):
net/ionic: fix build with Fedora Rawhide
Vanshika Shukla (1):
net/dpaa: fix reallocate mbuf handling
Varun Sethi (1):
common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog
Viacheslav Ovsiienko (8):
net/mlx5/hws: fix flex item as tunnel header
net/mlx5: add flex item query for tunnel mode
net/mlx5: fix flex item tunnel mode
net/mlx5: fix number of supported flex parsers
app/testpmd: remove flex item init command leftover
net/mlx5: fix next protocol validation after flex item
net/mlx5: fix non full word sample fields in flex item
net/mlx5: fix flex item header length field translation
Vladimir Medvedkin (3):
fib6: add runtime checks in AVX512 lookup
fib: fix AVX512 lookup
hash: fix thash LFSR initialization
Wathsala Vithanage (1):
power: enable CPPC
Xinying Yu (2):
vdpa/nfp: fix hardware initialization
vdpa/nfp: fix reconfiguration
Xueming Li (1):
23.11.3-rc1
Zerun Fu (1):
net/nfp: notify flower firmware about PF speed
.mailmap | 11 +
app/dumpcap/main.c | 15 +-
app/test-pmd/cmdline.c | 456 +++++++++---------
app/test-pmd/cmdline_flow.c | 12 -
app/test/test_event_dma_adapter.c | 5 +-
app/test/test_eventdev.c | 1 +
app/test/test_pcapng.c | 12 +-
app/test/test_power_cpufreq.c | 21 +-
devtools/checkpatches.sh | 8 +
drivers/baseband/acc/rte_acc100_pmd.c | 58 +--
drivers/baseband/acc/rte_vrb_pmd.c | 75 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
drivers/baseband/la12xx/bbdev_la12xx.c | 5 +-
.../baseband/turbo_sw/bbdev_turbo_software.c | 4 +-
drivers/bus/cdx/cdx_vfio.c | 8 +-
drivers/bus/dpaa/base/fman/fman.c | 29 +-
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +-
drivers/bus/dpaa/base/qbman/qman.c | 46 +-
drivers/bus/dpaa/include/fman.h | 3 +-
drivers/bus/fslmc/fslmc_bus.c | 8 +-
drivers/bus/fslmc/fslmc_vfio.c | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 4 +-
drivers/bus/ifpga/ifpga_bus.c | 8 +-
drivers/bus/vdev/vdev.c | 21 +-
drivers/bus/vdev/vdev_params.c | 2 +-
drivers/bus/vmbus/vmbus_common.c | 2 +-
drivers/common/cnxk/roc_dev.c | 18 +-
drivers/common/cnxk/roc_dev_priv.h | 2 +
drivers/common/cnxk/roc_ie_ot.c | 1 +
drivers/common/cnxk/roc_model.c | 2 +-
drivers/common/cnxk/roc_nix_inl.c | 8 +
drivers/common/cnxk/roc_nix_mac.c | 10 -
drivers/common/cnxk/roc_nix_ops.c | 20 +-
drivers/common/cnxk/roc_nix_tm.c | 2 +-
drivers/common/cnxk/roc_nix_tm_mark.c | 2 +-
drivers/common/cnxk/roc_nix_tm_ops.c | 2 +-
drivers/common/cnxk/roc_nix_tm_utils.c | 2 +-
drivers/common/cnxk/roc_platform.c | 2 +-
drivers/common/cnxk/roc_sso.c | 9 +-
drivers/common/cnxk/roc_tim.c | 2 +-
drivers/common/cpt/cpt_ucode.h | 4 +-
drivers/common/dpaax/caamflib/desc/pdcp.h | 10 +
drivers/common/iavf/iavf_prototype.h | 1 +
drivers/common/iavf/version.map | 1 +
drivers/common/idpf/base/idpf_osdep.h | 10 +-
drivers/common/idpf/idpf_common_device.c | 3 +-
drivers/common/idpf/idpf_common_logs.h | 5 +-
drivers/common/idpf/idpf_common_rxtx_avx512.c | 7 +
drivers/common/nfp/nfp_common_ctrl.h | 1 +
drivers/common/octeontx/octeontx_mbox.c | 4 +-
drivers/common/qat/meson.build | 2 +-
drivers/common/qat/qat_device.c | 6 +-
drivers/common/qat/qat_pf2vf.c | 4 +-
drivers/common/qat/qat_qp.c | 2 +-
drivers/compress/isal/isal_compress_pmd.c | 78 +--
drivers/compress/octeontx/otx_zip.h | 12 +-
drivers/compress/octeontx/otx_zip_pmd.c | 14 +-
drivers/compress/zlib/zlib_pmd.c | 26 +-
drivers/compress/zlib/zlib_pmd_ops.c | 4 +-
drivers/crypto/bcmfs/bcmfs_device.c | 4 +-
drivers/crypto/bcmfs/bcmfs_qp.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.c | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 32 +-
drivers/crypto/caam_jr/caam_jr_uio.c | 6 +-
drivers/crypto/ccp/ccp_dev.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cnxk_se.h | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 43 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 16 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
drivers/crypto/dpaa_sec/dpaa_sec_log.h | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 6 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.c | 4 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 2 +-
drivers/crypto/ipsec_mb/meson.build | 2 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 28 +-
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 +-
.../crypto/octeontx/otx_cryptodev_hw_access.h | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 42 +-
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 44 +-
drivers/crypto/qat/qat_asym.c | 2 +-
drivers/crypto/qat/qat_sym_session.c | 12 +-
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
drivers/crypto/uadk/uadk_crypto_pmd.c | 8 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/dma/dpaa/dpaa_qdma.c | 40 +-
drivers/dma/dpaa2/dpaa2_qdma.c | 10 +-
drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
drivers/dma/idxd/idxd_common.c | 2 +-
drivers/dma/idxd/idxd_pci.c | 8 +-
drivers/dma/ioat/ioat_dmadev.c | 14 +-
drivers/event/cnxk/cn10k_eventdev.c | 46 ++
drivers/event/cnxk/cn9k_eventdev.c | 31 ++
drivers/event/cnxk/cnxk_eventdev.c | 2 +-
drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 2 +-
drivers/event/dlb2/dlb2.c | 220 ++++-----
drivers/event/dlb2/dlb2_xstats.c | 6 +-
drivers/event/dlb2/pf/dlb2_main.c | 52 +-
drivers/event/dlb2/pf/dlb2_pf.c | 20 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 14 +-
drivers/event/octeontx/timvf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 30 +-
drivers/event/opdl/opdl_test.c | 116 ++---
drivers/event/sw/sw_evdev.c | 22 +-
drivers/event/sw/sw_evdev_xstats.c | 4 +-
drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 8 +-
drivers/mempool/octeontx/octeontx_fpavf.c | 22 +-
.../mempool/octeontx/rte_mempool_octeontx.c | 6 +-
drivers/ml/cnxk/cn10k_ml_dev.c | 32 +-
drivers/ml/cnxk/cnxk_ml_ops.c | 32 +-
drivers/ml/cnxk/mvtvm_ml_model.c | 2 +-
drivers/net/atlantic/atl_rxtx.c | 4 +-
drivers/net/atlantic/hw_atl/hw_atl_utils.c | 12 +-
drivers/net/axgbe/axgbe_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.c | 8 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 +-
drivers/net/bonding/rte_eth_bond_alb.c | 2 +-
drivers/net/bonding/rte_eth_bond_api.c | 4 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
drivers/net/cnxk/cn10k_ethdev.c | 18 +-
drivers/net/cnxk/cn10k_ethdev_sec.c | 10 +
drivers/net/cnxk/cn9k_ethdev.c | 17 +-
drivers/net/cnxk/cnxk_ethdev.c | 6 +-
drivers/net/cnxk/cnxk_ethdev.h | 11 +
drivers/net/cnxk/cnxk_ethdev_mcs.c | 14 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 2 +-
drivers/net/cnxk/cnxk_ethdev_sec.c | 2 +-
drivers/net/cnxk/version.map | 1 +
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 11 +
drivers/net/cpfl/cpfl_flow_parser.c | 37 +-
drivers/net/cpfl/cpfl_fxp_rule.c | 8 +-
drivers/net/dpaa/dpaa_ethdev.c | 6 +-
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 16 +-
drivers/net/dpaa2/dpaa2_flow.c | 36 +-
drivers/net/dpaa2/dpaa2_mux.c | 4 +-
drivers/net/dpaa2/dpaa2_recycle.c | 6 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 14 +-
drivers/net/dpaa2/dpaa2_sparser.c | 8 +-
drivers/net/dpaa2/dpaa2_tm.c | 24 +-
drivers/net/e1000/em_ethdev.c | 3 +
drivers/net/e1000/igb_ethdev.c | 6 +-
drivers/net/ena/base/ena_plat_dpdk.h | 10 +-
drivers/net/enetc/enetc_ethdev.c | 4 +-
drivers/net/enetfec/enet_ethdev.c | 4 +-
drivers/net/enetfec/enet_uio.c | 10 +-
drivers/net/enic/enic_ethdev.c | 20 +-
drivers/net/enic/enic_flow.c | 20 +-
drivers/net/enic/enic_vf_representor.c | 16 +-
drivers/net/failsafe/failsafe_args.c | 2 +-
drivers/net/failsafe/failsafe_eal.c | 2 +-
drivers/net/failsafe/failsafe_ether.c | 4 +-
drivers/net/failsafe/failsafe_intr.c | 6 +-
drivers/net/gve/base/gve_adminq.c | 2 +-
drivers/net/gve/base/gve_osdep.h | 48 +-
drivers/net/gve/gve_ethdev.c | 29 +-
drivers/net/gve/gve_ethdev.h | 2 +
drivers/net/gve/gve_rx_dqo.c | 86 ++--
drivers/net/gve/gve_tx_dqo.c | 11 +-
drivers/net/hinic/base/hinic_pmd_eqs.c | 2 +-
drivers/net/hinic/base/hinic_pmd_mbox.c | 6 +-
drivers/net/hinic/base/hinic_pmd_niccfg.c | 8 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 4 +-
drivers/net/hns3/hns3_dump.c | 12 +-
drivers/net/hns3/hns3_intr.c | 18 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
drivers/net/hns3/hns3_regs.c | 18 +-
drivers/net/i40e/base/i40e_adminq.c | 19 +-
drivers/net/i40e/base/i40e_common.c | 42 +-
drivers/net/i40e/base/i40e_devids.h | 3 +-
drivers/net/i40e/base/i40e_diag.c | 12 +-
drivers/net/i40e/base/i40e_nvm.c | 16 +-
drivers/net/i40e/i40e_ethdev.c | 37 +-
drivers/net/i40e/i40e_pf.c | 8 +-
drivers/net/i40e/i40e_rxtx.c | 24 +-
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 7 +
drivers/net/iavf/iavf_ethdev.c | 46 +-
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 7 +
drivers/net/iavf/iavf_vchnl.c | 5 +-
drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
drivers/net/ice/base/ice_controlq.c | 23 +-
drivers/net/ice/base/ice_nvm.c | 36 +-
drivers/net/ice/base/ice_switch.c | 2 -
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_dcf_vf_representor.c | 14 +-
drivers/net/ice/ice_ethdev.c | 44 +-
drivers/net/ice/ice_fdir_filter.c | 2 +-
drivers/net/ice/ice_hash.c | 8 +-
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ice/ice_rxtx_vec_avx512.c | 7 +
drivers/net/ionic/ionic_osdep.h | 30 +-
drivers/net/ipn3ke/ipn3ke_ethdev.c | 4 +-
drivers/net/ipn3ke/ipn3ke_flow.c | 23 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 20 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 10 +-
drivers/net/ixgbe/base/ixgbe_82599.c | 8 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 7 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 24 +-
drivers/net/ixgbe/ixgbe_pf.c | 18 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 8 +-
drivers/net/mana/meson.build | 4 +-
drivers/net/memif/rte_eth_memif.c | 12 +-
drivers/net/mlx4/mlx4.c | 4 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 17 +-
drivers/net/mlx5/mlx5.h | 9 +-
drivers/net/mlx5/mlx5_flow_dv.c | 7 +-
drivers/net/mlx5/mlx5_flow_flex.c | 194 +++++---
drivers/net/mlx5/mlx5_flow_hw.c | 8 +
drivers/net/mlx5/mlx5_rx.h | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfb/nfb_tx.c | 2 +-
.../net/nfp/flower/nfp_flower_representor.c | 6 +-
drivers/net/nfp/nfp_ethdev.c | 18 +-
drivers/net/nfp/nfp_flow.c | 1 -
drivers/net/nfp/nfp_ipsec.c | 15 +-
drivers/net/nfp/nfp_net_common.c | 10 +-
drivers/net/nfp/nfp_net_common.h | 2 +
drivers/net/ngbe/base/ngbe_hw.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 2 +-
drivers/net/ngbe/ngbe_pf.c | 10 +-
drivers/net/octeon_ep/cnxk_ep_tx.c | 2 +-
drivers/net/octeon_ep/cnxk_ep_vf.c | 12 +-
drivers/net/octeon_ep/otx2_ep_vf.c | 18 +-
drivers/net/octeon_ep/otx_ep_common.h | 2 +-
drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +--
drivers/net/octeon_ep/otx_ep_mbox.c | 30 +-
drivers/net/octeon_ep/otx_ep_rxtx.c | 74 +--
drivers/net/octeon_ep/otx_ep_vf.c | 20 +-
drivers/net/octeontx/base/octeontx_pkovf.c | 2 +-
drivers/net/octeontx/octeontx_ethdev.c | 4 +-
drivers/net/pcap/pcap_ethdev.c | 43 +-
drivers/net/pfe/pfe_ethdev.c | 22 +-
drivers/net/pfe/pfe_hif.c | 12 +-
drivers/net/pfe/pfe_hif_lib.c | 2 +-
drivers/net/qede/qede_rxtx.c | 66 +--
drivers/net/sfc/sfc_flow_rss.c | 4 +-
drivers/net/sfc/sfc_mae.c | 23 +-
drivers/net/tap/rte_eth_tap.c | 7 +-
drivers/net/tap/tap_netlink.c | 3 +-
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 4 +-
drivers/net/txgbe/txgbe_ipsec.c | 24 +-
drivers/net/txgbe/txgbe_pf.c | 20 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 2 +-
drivers/net/virtio/virtio_user_ethdev.c | 1 +
drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 2 +-
drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c | 14 +-
drivers/raw/ifpga/afu_pmd_n3000.c | 2 +-
drivers/raw/ifpga/base/opae_intel_max10.c | 11 +-
drivers/raw/ifpga/ifpga_rawdev.c | 102 ++--
drivers/regex/cn9k/cn9k_regexdev.c | 2 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 10 +-
drivers/vdpa/nfp/nfp_vdpa.c | 2 +-
drivers/vdpa/nfp/nfp_vdpa_core.c | 25 +-
.../pipeline_worker_generic.c | 12 +-
examples/ipsec-secgw/ipsec-secgw.c | 6 +-
examples/ipsec-secgw/ipsec_process.c | 3 +-
examples/vhost_blk/vhost_blk.c | 2 +-
lib/bpf/bpf_convert.c | 2 +-
lib/dmadev/rte_dmadev.c | 2 +-
lib/eal/common/eal_common_dev.c | 13 +-
lib/eal/x86/include/rte_io.h | 2 +-
lib/ethdev/rte_ethdev.c | 18 +-
lib/fib/dir24_8.c | 4 +-
lib/fib/trie.c | 10 +-
lib/hash/rte_thash.c | 26 +-
lib/log/rte_log.h | 21 +
lib/pcapng/rte_pcapng.c | 12 +-
lib/power/power_acpi_cpufreq.c | 6 +-
lib/power/power_amd_pstate_cpufreq.c | 6 +-
lib/power/power_common.c | 22 +
lib/power/power_common.h | 1 +
lib/power/power_cppc_cpufreq.c | 8 +-
lib/power/power_pstate_cpufreq.c | 6 +-
lib/power/rte_power_pmd_mgmt.c | 11 +-
lib/vhost/rte_vhost.h | 2 +
lib/vhost/socket.c | 11 +
lib/vhost/vdpa.c | 1 +
lib/vhost/vhost_user.c | 2 +-
285 files changed, 2474 insertions(+), 2078 deletions(-)
--
2.34.1
* patch 'bus/vdev: revert fix devargs in secondary process' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Mingjin Ye; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note that it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.
Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If the rebase required code changes
(i.e. not only metadata diffs), please double check that it was done
correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d5ac7180a8b48033600cd7006a7da1a95991c1f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d5ac7180a8b48033600cd7006a7da1a95991c1f Mon Sep 17 00:00:00 2001
From: Mingjin Ye <mingjinx.ye@intel.com>
Date: Thu, 14 Mar 2024 09:36:28 +0000
Subject: [PATCH] bus/vdev: revert fix devargs in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
The ASan tool detected a memory leak in the vdev driver's
alloc_devargs. The commit being reverted changed vdev
insertion so that the primary process allocates the devargs
while the secondary process only looks up existing devargs.
This prevents the device from being created when the
secondary process has not initialised the vdev device, and
it does not address the root cause of the leak.
Therefore the commit below is reverted.
After the revert, the memory leak still exists.
Bugzilla ID: 1450
Fixes: 6666628362c9 ("bus/vdev: fix devargs in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
drivers/bus/vdev/vdev.c | 21 +--------------------
1 file changed, 1 insertion(+), 20 deletions(-)
diff --git a/drivers/bus/vdev/vdev.c b/drivers/bus/vdev/vdev.c
index dcedd0d4a0..ec7abe7cda 100644
--- a/drivers/bus/vdev/vdev.c
+++ b/drivers/bus/vdev/vdev.c
@@ -263,22 +263,6 @@ alloc_devargs(const char *name, const char *args)
return devargs;
}
-static struct rte_devargs *
-vdev_devargs_lookup(const char *name)
-{
- struct rte_devargs *devargs;
- char dev_name[32];
-
- RTE_EAL_DEVARGS_FOREACH("vdev", devargs) {
- devargs->bus->parse(devargs->name, &dev_name);
- if (strcmp(dev_name, name) == 0) {
- VDEV_LOG(INFO, "devargs matched %s", dev_name);
- return devargs;
- }
- }
- return NULL;
-}
-
static int
insert_vdev(const char *name, const char *args,
struct rte_vdev_device **p_dev,
@@ -291,10 +275,7 @@ insert_vdev(const char *name, const char *args,
if (name == NULL)
return -EINVAL;
- if (rte_eal_process_type() == RTE_PROC_PRIMARY)
- devargs = alloc_devargs(name, args);
- else
- devargs = vdev_devargs_lookup(name);
+ devargs = alloc_devargs(name, args);
if (!devargs)
return -ENOMEM;
--
2.34.1
* patch 'log: add a per line log helper' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Stephen Hemminger, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note that it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.
Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If the rebase required code changes
(i.e. not only metadata diffs), please double check that it was done
correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Fri, 17 Nov 2023 14:18:23 +0100
Subject: [PATCH] log: add a per line log helper
Cc: Xueming Li <xuemingl@nvidia.com>
[upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
The gcc builtin __builtin_strchr can be used in a static assertion to check
whether a passed format string contains a \n.
This is useful for detecting double \n in log messages.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
devtools/checkpatches.sh | 8 ++++++++
lib/log/rte_log.h | 21 +++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/devtools/checkpatches.sh b/devtools/checkpatches.sh
index 10b79ca2bc..10d1bf490b 100755
--- a/devtools/checkpatches.sh
+++ b/devtools/checkpatches.sh
@@ -53,6 +53,14 @@ print_usage () {
check_forbidden_additions() { # <patch>
res=0
+ # refrain from new calls to RTE_LOG
+ awk -v FOLDERS="lib" \
+ -v EXPRESSIONS="RTE_LOG\\\(" \
+ -v RET_ON_FAIL=1 \
+ -v MESSAGE='Prefer RTE_LOG_LINE' \
+ -f $(dirname $(readlink -f $0))/check-forbidden-tokens.awk \
+ "$1" || res=1
+
# refrain from new additions of rte_panic() and rte_exit()
# multiple folders and expressions are separated by spaces
awk -v FOLDERS="lib drivers" \
diff --git a/lib/log/rte_log.h b/lib/log/rte_log.h
index f7a8405de9..584cea541e 100644
--- a/lib/log/rte_log.h
+++ b/lib/log/rte_log.h
@@ -17,6 +17,7 @@
extern "C" {
#endif
+#include <assert.h>
#include <stdint.h>
#include <stdio.h>
#include <stdarg.h>
@@ -358,6 +359,26 @@ int rte_vlog(uint32_t level, uint32_t logtype, const char *format, va_list ap)
RTE_LOGTYPE_ ## t, # t ": " __VA_ARGS__) : \
0)
+#if defined(RTE_TOOLCHAIN_GCC) && !defined(PEDANTIC)
+#define RTE_LOG_CHECK_NO_NEWLINE(fmt) \
+ static_assert(!__builtin_strchr(fmt, '\n'), \
+ "This log format string contains a \\n")
+#else
+#define RTE_LOG_CHECK_NO_NEWLINE(...)
+#endif
+
+#define RTE_LOG_LINE(l, t, ...) do { \
+ RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
+ RTE_LOG(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
+#define RTE_LOG_DP_LINE(l, t, ...) do { \
+ RTE_LOG_CHECK_NO_NEWLINE(RTE_FMT_HEAD(__VA_ARGS__ ,)); \
+ RTE_LOG_DP(l, t, RTE_FMT(RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \
+ RTE_FMT_TAIL(__VA_ARGS__ ,))); \
+} while (0)
+
#define RTE_LOG_REGISTER_IMPL(type, name, level) \
int type; \
RTE_INIT(__##type) \
--
2.34.1
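For readers unfamiliar with the trick in the patch above, here is a minimal
standalone sketch of the same compile-time check. It uses hypothetical
LOG_LINE and CHECK_NO_NEWLINE macros rather than DPDK's RTE_LOG_LINE, and,
like the patch, it relies on GCC folding __builtin_strchr on a string
literal, so it is GCC-only and not valid under -pedantic:

/* Minimal sketch (not DPDK code): a LOG_LINE-style macro that appends
 * its own newline and statically rejects formats that already have one. */
#include <assert.h>
#include <stdio.h>

#define CHECK_NO_NEWLINE(fmt) \
	static_assert(!__builtin_strchr(fmt, '\n'), \
		"This log format string contains a \\n")

#define LOG_LINE(fmt, ...) do { \
	CHECK_NO_NEWLINE(fmt); \
	printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
	LOG_LINE("queue %d configured", 3);    /* prints one trailing newline */
	/* LOG_LINE("queue %d configured\n", 3);  would fail to compile with
	 *   "This log format string contains a \n" */
	return 0;
}

Compiling the commented-out call trips the static assertion: that is how the
helper catches format strings which would otherwise print a blank line once
the macro appends its own newline, which is exactly what the follow-up patch
'drivers: remove redundant newline from logs' cleans up in the drivers.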
* patch 'drivers: remove redundant newline from logs' has been queued to stable release 23.11.3
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note that it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24, so please
shout if you have any objections.
Also note that after the patch there is a diff of the upstream commit vs the
patch applied to the branch. This indicates whether any rebasing was needed
to apply the patch to the stable branch. If the rebase required code changes
(i.e. not only metadata diffs), please double check that it was done
correctly.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5b424bd34d8c972d428d03bc9952528d597e2040
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Wed, 13 Dec 2023 20:29:58 +0100
Subject: [PATCH] drivers: remove redundant newline from logs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]
Fix places where two newline characters may be logged.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/baseband/acc/rte_acc100_pmd.c | 22 +-
drivers/baseband/acc/rte_vrb_pmd.c | 26 +--
.../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
drivers/baseband/la12xx/bbdev_la12xx.c | 4 +-
.../baseband/turbo_sw/bbdev_turbo_software.c | 4 +-
drivers/bus/cdx/cdx_vfio.c | 8 +-
drivers/bus/dpaa/include/fman.h | 3 +-
drivers/bus/fslmc/fslmc_bus.c | 8 +-
drivers/bus/fslmc/fslmc_vfio.c | 10 +-
drivers/bus/fslmc/portal/dpaa2_hw_dpci.c | 4 +-
drivers/bus/ifpga/ifpga_bus.c | 8 +-
drivers/bus/vdev/vdev_params.c | 2 +-
drivers/bus/vmbus/vmbus_common.c | 2 +-
drivers/common/cnxk/roc_dev.c | 2 +-
drivers/common/cnxk/roc_model.c | 2 +-
drivers/common/cnxk/roc_nix_ops.c | 20 +-
drivers/common/cnxk/roc_nix_tm.c | 2 +-
drivers/common/cnxk/roc_nix_tm_mark.c | 2 +-
drivers/common/cnxk/roc_nix_tm_ops.c | 2 +-
drivers/common/cnxk/roc_nix_tm_utils.c | 2 +-
drivers/common/cnxk/roc_sso.c | 2 +-
drivers/common/cnxk/roc_tim.c | 2 +-
drivers/common/cpt/cpt_ucode.h | 4 +-
drivers/common/idpf/idpf_common_logs.h | 5 +-
drivers/common/octeontx/octeontx_mbox.c | 4 +-
drivers/common/qat/qat_pf2vf.c | 4 +-
drivers/common/qat/qat_qp.c | 2 +-
drivers/compress/isal/isal_compress_pmd.c | 78 +++----
drivers/compress/octeontx/otx_zip.h | 12 +-
drivers/compress/octeontx/otx_zip_pmd.c | 14 +-
drivers/compress/zlib/zlib_pmd.c | 26 +--
drivers/compress/zlib/zlib_pmd_ops.c | 4 +-
drivers/crypto/bcmfs/bcmfs_qp.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 2 +-
drivers/crypto/bcmfs/bcmfs_sym_session.c | 2 +-
drivers/crypto/caam_jr/caam_jr.c | 32 +--
drivers/crypto/caam_jr/caam_jr_uio.c | 6 +-
drivers/crypto/ccp/ccp_dev.c | 2 +-
drivers/crypto/ccp/rte_ccp_pmd.c | 2 +-
drivers/crypto/cnxk/cnxk_se.h | 6 +-
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 42 ++--
drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c | 16 +-
drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
drivers/crypto/dpaa_sec/dpaa_sec_log.h | 2 +-
drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c | 6 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.c | 4 +-
drivers/crypto/ipsec_mb/ipsec_mb_private.h | 2 +-
drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 28 +--
drivers/crypto/ipsec_mb/pmd_snow3g.c | 4 +-
.../crypto/octeontx/otx_cryptodev_hw_access.h | 6 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 42 ++--
drivers/crypto/openssl/rte_openssl_pmd_ops.c | 44 ++--
drivers/crypto/qat/qat_asym.c | 2 +-
drivers/crypto/qat/qat_sym_session.c | 12 +-
drivers/crypto/uadk/uadk_crypto_pmd.c | 8 +-
drivers/crypto/virtio/virtio_cryptodev.c | 2 +-
drivers/dma/dpaa/dpaa_qdma.c | 40 ++--
drivers/dma/dpaa2/dpaa2_qdma.c | 10 +-
drivers/dma/hisilicon/hisi_dmadev.c | 6 +-
drivers/dma/idxd/idxd_common.c | 2 +-
drivers/dma/idxd/idxd_pci.c | 6 +-
drivers/dma/ioat/ioat_dmadev.c | 14 +-
drivers/event/cnxk/cnxk_tim_evdev.c | 2 +-
drivers/event/dlb2/dlb2.c | 220 +++++++++---------
drivers/event/dlb2/dlb2_xstats.c | 6 +-
drivers/event/dlb2/pf/dlb2_main.c | 52 ++---
drivers/event/dlb2/pf/dlb2_pf.c | 20 +-
drivers/event/dpaa2/dpaa2_eventdev.c | 14 +-
drivers/event/octeontx/timvf_evdev.c | 2 +-
drivers/event/opdl/opdl_evdev.c | 30 +--
drivers/event/opdl/opdl_test.c | 116 ++++-----
drivers/event/sw/sw_evdev.c | 22 +-
drivers/event/sw/sw_evdev_xstats.c | 4 +-
drivers/mempool/dpaa2/dpaa2_hw_mempool.c | 8 +-
drivers/mempool/octeontx/octeontx_fpavf.c | 22 +-
.../mempool/octeontx/rte_mempool_octeontx.c | 6 +-
drivers/ml/cnxk/cn10k_ml_dev.c | 32 +--
drivers/ml/cnxk/cnxk_ml_ops.c | 20 +-
drivers/net/atlantic/atl_rxtx.c | 4 +-
drivers/net/atlantic/hw_atl/hw_atl_utils.c | 12 +-
drivers/net/axgbe/axgbe_ethdev.c | 2 +-
drivers/net/bnx2x/bnx2x.c | 8 +-
drivers/net/bonding/rte_eth_bond_8023ad.c | 4 +-
drivers/net/bonding/rte_eth_bond_alb.c | 2 +-
drivers/net/bonding/rte_eth_bond_api.c | 4 +-
drivers/net/bonding/rte_eth_bond_pmd.c | 6 +-
drivers/net/cnxk/cnxk_ethdev.c | 4 +-
drivers/net/cnxk/cnxk_ethdev_mcs.c | 14 +-
drivers/net/cnxk/cnxk_ethdev_ops.c | 2 +-
drivers/net/cpfl/cpfl_flow_parser.c | 2 +-
drivers/net/cpfl/cpfl_fxp_rule.c | 8 +-
drivers/net/dpaa2/dpaa2_ethdev.c | 16 +-
drivers/net/dpaa2/dpaa2_flow.c | 36 +--
drivers/net/dpaa2/dpaa2_mux.c | 4 +-
drivers/net/dpaa2/dpaa2_recycle.c | 6 +-
drivers/net/dpaa2/dpaa2_rxtx.c | 14 +-
drivers/net/dpaa2/dpaa2_sparser.c | 8 +-
drivers/net/dpaa2/dpaa2_tm.c | 24 +-
drivers/net/e1000/igb_ethdev.c | 2 +-
drivers/net/enetc/enetc_ethdev.c | 4 +-
drivers/net/enetfec/enet_ethdev.c | 4 +-
drivers/net/enetfec/enet_uio.c | 10 +-
drivers/net/enic/enic_ethdev.c | 20 +-
drivers/net/enic/enic_flow.c | 20 +-
drivers/net/enic/enic_vf_representor.c | 16 +-
drivers/net/failsafe/failsafe_args.c | 2 +-
drivers/net/failsafe/failsafe_eal.c | 2 +-
drivers/net/failsafe/failsafe_ether.c | 4 +-
drivers/net/failsafe/failsafe_intr.c | 6 +-
drivers/net/gve/base/gve_adminq.c | 2 +-
drivers/net/hinic/base/hinic_pmd_eqs.c | 2 +-
drivers/net/hinic/base/hinic_pmd_mbox.c | 6 +-
drivers/net/hinic/base/hinic_pmd_niccfg.c | 8 +-
drivers/net/hinic/hinic_pmd_ethdev.c | 4 +-
drivers/net/hns3/hns3_dump.c | 12 +-
drivers/net/hns3/hns3_intr.c | 12 +-
drivers/net/hns3/hns3_ptp.c | 2 +-
drivers/net/hns3/hns3_regs.c | 4 +-
drivers/net/i40e/i40e_ethdev.c | 37 ++-
drivers/net/i40e/i40e_pf.c | 8 +-
drivers/net/i40e/i40e_rxtx.c | 24 +-
drivers/net/iavf/iavf_ethdev.c | 12 +-
drivers/net/iavf/iavf_rxtx.c | 2 +-
drivers/net/ice/ice_dcf_ethdev.c | 4 +-
drivers/net/ice/ice_dcf_vf_representor.c | 14 +-
drivers/net/ice/ice_ethdev.c | 44 ++--
drivers/net/ice/ice_fdir_filter.c | 2 +-
drivers/net/ice/ice_hash.c | 8 +-
drivers/net/ice/ice_rxtx.c | 2 +-
drivers/net/ipn3ke/ipn3ke_ethdev.c | 4 +-
drivers/net/ipn3ke/ipn3ke_flow.c | 23 +-
drivers/net/ipn3ke/ipn3ke_representor.c | 20 +-
drivers/net/ipn3ke/ipn3ke_tm.c | 10 +-
drivers/net/ixgbe/ixgbe_ethdev.c | 7 +-
drivers/net/ixgbe/ixgbe_ipsec.c | 24 +-
drivers/net/ixgbe/ixgbe_pf.c | 18 +-
drivers/net/ixgbe/rte_pmd_ixgbe.c | 8 +-
drivers/net/memif/rte_eth_memif.c | 2 +-
drivers/net/mlx4/mlx4.c | 4 +-
drivers/net/netvsc/hn_rxtx.c | 4 +-
drivers/net/ngbe/base/ngbe_hw.c | 2 +-
drivers/net/ngbe/ngbe_ethdev.c | 2 +-
drivers/net/ngbe/ngbe_pf.c | 10 +-
drivers/net/octeon_ep/cnxk_ep_tx.c | 2 +-
drivers/net/octeon_ep/cnxk_ep_vf.c | 12 +-
drivers/net/octeon_ep/otx2_ep_vf.c | 18 +-
drivers/net/octeon_ep/otx_ep_common.h | 2 +-
drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +++----
drivers/net/octeon_ep/otx_ep_mbox.c | 30 +--
drivers/net/octeon_ep/otx_ep_rxtx.c | 74 +++---
drivers/net/octeon_ep/otx_ep_vf.c | 20 +-
drivers/net/octeontx/base/octeontx_pkovf.c | 2 +-
drivers/net/octeontx/octeontx_ethdev.c | 4 +-
drivers/net/pcap/pcap_ethdev.c | 4 +-
drivers/net/pfe/pfe_ethdev.c | 22 +-
drivers/net/pfe/pfe_hif.c | 12 +-
drivers/net/pfe/pfe_hif_lib.c | 2 +-
drivers/net/qede/qede_rxtx.c | 66 +++---
drivers/net/thunderx/nicvf_ethdev.c | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 4 +-
drivers/net/txgbe/txgbe_ipsec.c | 24 +-
drivers/net/txgbe/txgbe_pf.c | 20 +-
.../net/virtio/virtio_user/virtio_user_dev.c | 2 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 4 +-
drivers/net/vmxnet3/vmxnet3_rxtx.c | 2 +-
drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c | 14 +-
drivers/raw/ifpga/afu_pmd_n3000.c | 2 +-
drivers/raw/ifpga/ifpga_rawdev.c | 94 ++++----
drivers/regex/cn9k/cn9k_regexdev.c | 2 +-
drivers/vdpa/ifc/ifcvf_vdpa.c | 10 +-
drivers/vdpa/nfp/nfp_vdpa.c | 2 +-
171 files changed, 1194 insertions(+), 1211 deletions(-)
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 292537e24d..9d028f0f48 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
}
rte_bbdev_log_debug(
- "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u\n",
+ "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u AQ %u %u %u %u Len %u %u %u %u",
(d->pf_device) ? "PF" : "VF",
(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);
if ((harq_layout[harq_index].offset > 0) && harq_prun) {
- rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
+ rte_bbdev_log_debug("HARQ IN offset unexpected for now");
fcw->hcin_size0 = harq_layout[harq_index].size0;
fcw->hcin_offset = harq_layout[harq_index].offset;
fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
uint32_t harq_index;
if (harq_in_length == 0) {
- rte_bbdev_log(ERR, "Loopback of invalid null size\n");
+ rte_bbdev_log(ERR, "Loopback of invalid null size");
return -EINVAL;
}
@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
fcw->hcin_en = 1;
fcw->hcout_en = 1;
- rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d\n",
+ rte_bbdev_log(DEBUG, "Loopback IN %d Index %d offset %d length %d %d",
ddr_mem_in, harq_index,
harq_layout[harq_index].offset, harq_in_length,
harq_dma_length_in);
@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
fcw->hcin_size0 = harq_in_length;
}
harq_layout[harq_index].val = 0;
- rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d\n",
+ rte_bbdev_log(DEBUG, "Loopback FCW Config %d %d %d",
fcw->hcin_size0, fcw->hcin_offset, fcw->hcin_size1);
fcw->hcout_size0 = harq_in_length;
fcw->hcin_decomp_mode = h_comp;
@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
if (i > 0)
same_op = cmp_ldpc_dec_op(&ops[i-1]);
- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x num %d\n", desc, rsp.val, desc->req.numCBs);
+ rte_bbdev_log_debug("Resp. desc %p: %x num %d", desc, rsp.val, desc->req.numCBs);
/* Dequeue */
op = desc->req.op_addr;
@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
__ATOMIC_RELAXED);
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d\n",
+ rte_bbdev_log_debug("Resp. desc %p: %x descs %d cbs %d",
desc, rsp.val, descs_in_tb, desc->req.numCBs);
op->status |= ((rsp.dma_err) ? (1 << RTE_BBDEV_DRV_ERROR) : 0);
@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x\n", desc, rsp.val);
+ rte_bbdev_log_debug("Resp. desc %p: %x", desc, rsp.val);
/* Dequeue */
op = desc->req.op_addr;
@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
__ATOMIC_RELAXED);
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d\n",
+ rte_bbdev_log_debug("Resp. desc %p: %x r %d c %d",
desc, rsp.val, cb_idx, cbs_in_tb);
op->status |= ((rsp.input_err) ? (1 << RTE_BBDEV_DATA_ERROR) : 0);
@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
}
if (aram_address > ACC100_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, ACC100_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 686e086a5c..88e1d03ebf 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -348,7 +348,7 @@ fetch_acc_config(struct rte_bbdev *dev)
}
rte_bbdev_log_debug(
- "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u\n",
+ "%s Config LLR SIGN IN/OUT %s %s QG %u %u %u %u %u %u AQ %u %u %u %u %u %u Len %u %u %u %u %u %u",
(d->pf_device) ? "PF" : "VF",
(acc_conf->input_pos_llr_1_bit) ? "POS" : "NEG",
(acc_conf->output_pos_llr_1_bit) ? "POS" : "NEG",
@@ -464,7 +464,7 @@ vrb_dev_interrupt_handler(void *cb_arg)
}
} else {
rte_bbdev_log_debug(
- "VRB VF Interrupt received, Info Ring data: 0x%x\n",
+ "VRB VF Interrupt received, Info Ring data: 0x%x",
ring_data->val);
switch (int_nb) {
case ACC_VF_INT_DMA_DL_DESC_IRQ:
@@ -698,7 +698,7 @@ vrb_intr_enable(struct rte_bbdev *dev)
if (d->device_variant == VRB1_VARIANT) {
/* On VRB1: cannot enable MSI/IR to avoid potential back-pressure corner case. */
- rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt\n",
+ rte_bbdev_log(ERR, "VRB1 (%s) doesn't support any MSI/MSI-X interrupt",
dev->data->name);
return -ENOTSUP;
}
@@ -800,7 +800,7 @@ vrb_intr_enable(struct rte_bbdev *dev)
return 0;
}
- rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts\n",
+ rte_bbdev_log(ERR, "Device (%s) supports only VFIO MSI/MSI-X interrupts",
dev->data->name);
return -ENOTSUP;
}
@@ -1023,7 +1023,7 @@ vrb_queue_setup(struct rte_bbdev *dev, uint16_t queue_id,
d->queue_offset(d->pf_device, q->vf_id, q->qgrp_id, q->aq_id));
rte_bbdev_log_debug(
- "Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p\n",
+ "Setup dev%u q%u: qgrp_id=%u, vf_id=%u, aq_id=%u, aq_depth=%u, mmio_reg_enqueue=%p base %p",
dev->data->dev_id, queue_id, q->qgrp_id, q->vf_id,
q->aq_id, q->aq_depth, q->mmio_reg_enqueue,
d->mmio_base);
@@ -1076,7 +1076,7 @@ vrb_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
);
} else if (op_type == RTE_BBDEV_OP_MLDTS) {
struct rte_bbdev_mldts_op *op_mldts = (struct rte_bbdev_mldts_op *) op;
- rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x\n",
+ rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x",
index,
op_mldts->mldts.num_rbs, op_mldts->mldts.num_layers,
op_mldts->mldts.r_rep,
@@ -2492,7 +2492,7 @@ vrb_enqueue_ldpc_dec_one_op_cb(struct acc_queue *q, struct rte_bbdev_dec_op *op,
hq_output = op->ldpc_dec.harq_combined_output.data;
hq_len = op->ldpc_dec.harq_combined_output.length;
if (unlikely(!mbuf_append(hq_output_head, hq_output, hq_len))) {
- rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output mbuf issue %d %d",
hq_output->buf_len,
hq_len);
return -1;
@@ -2985,7 +2985,7 @@ vrb_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
break;
}
avail -= 1;
- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
i, ops[i]->ldpc_dec.op_flags, ops[i]->ldpc_dec.rv_index,
ops[i]->ldpc_dec.iter_max, ops[i]->ldpc_dec.iter_count,
ops[i]->ldpc_dec.basegraph, ops[i]->ldpc_dec.z_c,
@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
return -1;
rsp.val = atom_desc.rsp.val;
- rte_bbdev_log_debug("Resp. desc %p: %x %x %x\n", desc, rsp.val, desc->rsp.add_info_0,
+ rte_bbdev_log_debug("Resp. desc %p: %x %x %x", desc, rsp.val, desc->rsp.add_info_0,
desc->rsp.add_info_1);
/* Dequeue. */
@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
}
if (check_bit(op->ldpc_dec.op_flags, RTE_BBDEV_LDPC_CRC_TYPE_24A_CHECK)) {
- rte_bbdev_log_debug("TB-CRC Check %x\n", tb_crc_check);
+ rte_bbdev_log_debug("TB-CRC Check %x", tb_crc_check);
if (tb_crc_check > 0)
op->status |= 1 << RTE_BBDEV_CRC_ERROR;
}
@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
layer_idx = RTE_MIN(op->mldts.num_layers - VRB2_MLD_MIN_LAYER,
VRB2_MLD_MAX_LAYER - VRB2_MLD_MIN_LAYER);
rrep_idx = RTE_MIN(op->mldts.r_rep, VRB2_MLD_MAX_RREP);
- rte_bbdev_log_debug("RB %d index %d %d max %d\n", op->mldts.num_rbs, layer_idx, rrep_idx,
+ rte_bbdev_log_debug("RB %d index %d %d max %d", op->mldts.num_rbs, layer_idx, rrep_idx,
max_rb[layer_idx][rrep_idx]);
return (op->mldts.num_rbs <= max_rb[layer_idx][rrep_idx]);
@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
}
if (aram_address > VRB1_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, VRB1_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
}
}
if (aram_address > VRB2_WORDS_IN_ARAM_SIZE) {
- rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d\n",
+ rte_bbdev_log(ERR, "ARAM Configuration not fitting %d %d",
aram_address, VRB2_WORDS_IN_ARAM_SIZE);
return -EINVAL;
}
diff --git a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
index 6b0644ffc5..d60cd3a5c5 100644
--- a/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
+++ b/drivers/baseband/fpga_5gnr_fec/rte_fpga_5gnr_fec.c
@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
do {
if (cnt > 0)
usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
- rte_bbdev_log_debug("Acquiring Mutex for %x\n",
+ rte_bbdev_log_debug("Acquiring Mutex for %x",
q->ddr_mutex_uuid);
fpga_reg_write_32(q->d->mmio_base,
FPGA_5GNR_FEC_MUTEX,
mutex_ctrl);
mutex_read = fpga_reg_read_32(q->d->mmio_base,
FPGA_5GNR_FEC_MUTEX);
- rte_bbdev_log_debug("Mutex %x cnt %d owner %x\n",
+ rte_bbdev_log_debug("Mutex %x cnt %d owner %x",
mutex_read, cnt, q->ddr_mutex_uuid);
cnt++;
} while ((mutex_read >> 16) != q->ddr_mutex_uuid);
@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
if (reg_32 < harq_in_length) {
left_length = reg_32;
- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
}
input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
if (reg < harq_in_length) {
harq_in_length = reg;
- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
}
if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
- rte_bbdev_log(ERR, "HARQ output buffer warning %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output buffer warning %d %d",
harq_output->buf_len -
rte_pktmbuf_headroom(harq_output),
harq_in_length);
harq_in_length = harq_output->buf_len -
rte_pktmbuf_headroom(harq_output);
if (!mbuf_append(harq_output, harq_output, harq_in_length)) {
- rte_bbdev_log(ERR, "HARQ output buffer issue %d %d\n",
+ rte_bbdev_log(ERR, "HARQ output buffer issue %d %d",
harq_output->buf_len, harq_in_length);
return -1;
}
@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
if (reg == FPGA_DDR_OVERFLOW) {
rte_bbdev_log(ERR,
- "Read address is overflow!\n");
+ "Read address is overflow!");
return -1;
}
}
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index 1a56e73abd..af4b4f1e9a 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -201,7 +201,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
q_priv->la12xx_core_id = LA12XX_LDPC_DEC_CORE;
break;
default:
- rte_bbdev_log(ERR, "Unsupported op type\n");
+ rte_bbdev_log(ERR, "Unsupported op type");
return -1;
}
@@ -269,7 +269,7 @@ la12xx_e200_queue_setup(struct rte_bbdev *dev,
ch->feca_blk_id = rte_cpu_to_be_32(priv->num_ldpc_dec_queues++);
break;
default:
- rte_bbdev_log(ERR, "Not supported op type\n");
+ rte_bbdev_log(ERR, "Not supported op type");
return -1;
}
ch->op_type = rte_cpu_to_be_32(q_priv->op_type);
diff --git a/drivers/baseband/turbo_sw/bbdev_turbo_software.c b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
index 8ddc7ff05f..a66dcd8962 100644
--- a/drivers/baseband/turbo_sw/bbdev_turbo_software.c
+++ b/drivers/baseband/turbo_sw/bbdev_turbo_software.c
@@ -269,7 +269,7 @@ info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
dev_info->num_queues[op_cap->type] = num_queue_per_type;
}
- rte_bbdev_log_debug("got device info from %u\n", dev->data->dev_id);
+ rte_bbdev_log_debug("got device info from %u", dev->data->dev_id);
}
/* Release queue */
@@ -1951,7 +1951,7 @@ turbo_sw_bbdev_probe(struct rte_vdev_device *vdev)
parse_turbo_sw_params(&init_params, input_args);
rte_bbdev_log_debug(
- "Initialising %s on NUMA node %d with max queues: %d\n",
+ "Initialising %s on NUMA node %d with max queues: %d",
name, init_params.socket_id, init_params.queues_num);
return turbo_sw_bbdev_create(vdev, &init_params);
diff --git a/drivers/bus/cdx/cdx_vfio.c b/drivers/bus/cdx/cdx_vfio.c
index 79abc3f120..664f267471 100644
--- a/drivers/bus/cdx/cdx_vfio.c
+++ b/drivers/bus/cdx/cdx_vfio.c
@@ -638,7 +638,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
feature->flags |= VFIO_DEVICE_FEATURE_SET;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret) {
- CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)\n",
+ CDX_BUS_ERR("Bus Master configuring not supported for device: %s, error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
return ret;
@@ -648,7 +648,7 @@ rte_cdx_vfio_bm_enable(struct rte_cdx_device *dev)
vfio_bm_feature->op = VFIO_DEVICE_FEATURE_SET_MASTER;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret < 0)
- CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("BM Enable Error for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
@@ -682,7 +682,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
feature->flags |= VFIO_DEVICE_FEATURE_SET;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret) {
- CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("Bus Master configuring not supported for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
return ret;
@@ -692,7 +692,7 @@ rte_cdx_vfio_bm_disable(struct rte_cdx_device *dev)
vfio_bm_feature->op = VFIO_DEVICE_FEATURE_CLEAR_MASTER;
ret = ioctl(vfio_dev_fd, RTE_VFIO_DEVICE_FEATURE, feature);
if (ret < 0)
- CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)\n",
+ CDX_BUS_ERR("BM Disable Error for device: %s, Error: %d (%s)",
dev->name, errno, strerror(errno));
free(feature);
diff --git a/drivers/bus/dpaa/include/fman.h b/drivers/bus/dpaa/include/fman.h
index 3a6dd555a7..19f6132bba 100644
--- a/drivers/bus/dpaa/include/fman.h
+++ b/drivers/bus/dpaa/include/fman.h
@@ -403,7 +403,8 @@ extern int fman_ccsr_map_fd;
#define FMAN_ERR(rc, fmt, args...) \
do { \
_errno = (rc); \
- DPAA_BUS_LOG(ERR, fmt "(%d)", ##args, errno); \
+ rte_log(RTE_LOG_ERR, dpaa_logtype_bus, "dpaa: " fmt "(%d)\n", \
+ ##args, errno); \
} while (0)
#define FMAN_IP_REV_1 0xC30C4
diff --git a/drivers/bus/fslmc/fslmc_bus.c b/drivers/bus/fslmc/fslmc_bus.c
index 89f0f329c0..adb452fd3e 100644
--- a/drivers/bus/fslmc/fslmc_bus.c
+++ b/drivers/bus/fslmc/fslmc_bus.c
@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
const struct rte_dpaa2_device *dstart;
struct rte_dpaa2_device *dev;
- DPAA2_BUS_DEBUG("Finding a device named %s\n", (const char *)data);
+ DPAA2_BUS_DEBUG("Finding a device named %s", (const char *)data);
/* find_device is always called with an opaque object which should be
* passed along to the 'cmp' function iterating over all device obj
@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
}
while (dev != NULL) {
if (cmp(&dev->device, data) == 0) {
- DPAA2_BUS_DEBUG("Found device (%s)\n",
+ DPAA2_BUS_DEBUG("Found device (%s)",
dev->device.name);
return &dev->device;
}
@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
/* Expectation is that device would be name=device_name */
if (strncmp(str, "name=", 5) != 0) {
- DPAA2_BUS_DEBUG("Invalid device string (%s)\n", str);
+ DPAA2_BUS_DEBUG("Invalid device string (%s)", str);
return NULL;
}
/* Now that name=device_name format is available, split */
dup = strdup(str);
if (dup == NULL) {
- DPAA2_BUS_DEBUG("Dup string (%s) failed!\n", str);
+ DPAA2_BUS_DEBUG("Dup string (%s) failed!", str);
return NULL;
}
dev_name = dup + strlen("name=");
diff --git a/drivers/bus/fslmc/fslmc_vfio.c b/drivers/bus/fslmc/fslmc_vfio.c
index 5966776a85..b90efeb651 100644
--- a/drivers/bus/fslmc/fslmc_vfio.c
+++ b/drivers/bus/fslmc/fslmc_vfio.c
@@ -232,7 +232,7 @@ fslmc_memevent_cb(enum rte_mem_event type, const void *addr, size_t len,
/* iova_addr may be set to RTE_BAD_IOVA */
if (iova_addr == RTE_BAD_IOVA) {
- DPAA2_BUS_DEBUG("Segment has invalid iova, skipping\n");
+ DPAA2_BUS_DEBUG("Segment has invalid iova, skipping");
cur_len += map_len;
continue;
}
@@ -389,7 +389,7 @@ rte_fslmc_vfio_mem_dmamap(uint64_t vaddr, uint64_t iova, uint64_t size)
dma_map.vaddr = vaddr;
dma_map.iova = iova;
- DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64"\n",
+ DPAA2_BUS_DEBUG("VFIOdmamap 0x%"PRIx64":0x%"PRIx64",size 0x%"PRIx64,
(uint64_t)dma_map.vaddr, (uint64_t)dma_map.iova,
(uint64_t)dma_map.size);
ret = ioctl(group->container->fd, VFIO_IOMMU_MAP_DMA,
@@ -480,13 +480,13 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
ret = ioctl(vfio_group_fd, VFIO_GROUP_GET_STATUS, &group_status);
if (ret) {
DPAA2_BUS_ERR(" %s cannot get group status, "
- "error %i (%s)\n", dev_addr,
+ "error %i (%s)", dev_addr,
errno, strerror(errno));
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
} else if (!(group_status.flags & VFIO_GROUP_FLAGS_VIABLE)) {
- DPAA2_BUS_ERR(" %s VFIO group is not viable!\n", dev_addr);
+ DPAA2_BUS_ERR(" %s VFIO group is not viable!", dev_addr);
close(vfio_group_fd);
rte_vfio_clear_group(vfio_group_fd);
return -1;
@@ -503,7 +503,7 @@ fslmc_vfio_setup_device(const char *sysfs_base, const char *dev_addr,
&vfio_container_fd);
if (ret) {
DPAA2_BUS_ERR(" %s cannot add VFIO group to container, "
- "error %i (%s)\n", dev_addr,
+ "error %i (%s)", dev_addr,
errno, strerror(errno));
close(vfio_group_fd);
close(vfio_container_fd);
diff --git a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
index 07256ed7ec..7e858a113f 100644
--- a/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
+++ b/drivers/bus/fslmc/portal/dpaa2_hw_dpci.c
@@ -86,7 +86,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
sizeof(struct queue_storage_info_t),
RTE_CACHE_LINE_SIZE);
if (!rxq->q_storage) {
- DPAA2_BUS_ERR("q_storage allocation failed\n");
+ DPAA2_BUS_ERR("q_storage allocation failed");
ret = -ENOMEM;
goto err;
}
@@ -94,7 +94,7 @@ rte_dpaa2_create_dpci_device(int vdev_fd __rte_unused,
memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
ret = dpaa2_alloc_dq_storage(rxq->q_storage);
if (ret) {
- DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed\n");
+ DPAA2_BUS_ERR("dpaa2_alloc_dq_storage failed");
goto err;
}
}
diff --git a/drivers/bus/ifpga/ifpga_bus.c b/drivers/bus/ifpga/ifpga_bus.c
index ffb0c61214..11b31eee4f 100644
--- a/drivers/bus/ifpga/ifpga_bus.c
+++ b/drivers/bus/ifpga/ifpga_bus.c
@@ -180,7 +180,7 @@ ifpga_scan_one(struct rte_rawdev *rawdev,
rawdev->dev_ops->firmware_load &&
rawdev->dev_ops->firmware_load(rawdev,
&afu_pr_conf)){
- IFPGA_BUS_ERR("firmware load error %d\n", ret);
+ IFPGA_BUS_ERR("firmware load error %d", ret);
goto end;
}
afu_dev->id.uuid.uuid_low = afu_pr_conf.afu_id.uuid.uuid_low;
@@ -316,7 +316,7 @@ ifpga_probe_all_drivers(struct rte_afu_device *afu_dev)
/* Check if a driver is already loaded */
if (rte_dev_is_probed(&afu_dev->device)) {
- IFPGA_BUS_DEBUG("Device %s is already probed\n",
+ IFPGA_BUS_DEBUG("Device %s is already probed",
rte_ifpga_device_name(afu_dev));
return -EEXIST;
}
@@ -353,7 +353,7 @@ ifpga_probe(void)
if (ret == -EEXIST)
continue;
if (ret < 0)
- IFPGA_BUS_ERR("failed to initialize %s device\n",
+ IFPGA_BUS_ERR("failed to initialize %s device",
rte_ifpga_device_name(afu_dev));
}
@@ -408,7 +408,7 @@ ifpga_remove_driver(struct rte_afu_device *afu_dev)
name = rte_ifpga_device_name(afu_dev);
if (afu_dev->driver == NULL) {
- IFPGA_BUS_DEBUG("no driver attach to device %s\n", name);
+ IFPGA_BUS_DEBUG("no driver attach to device %s", name);
return 1;
}
diff --git a/drivers/bus/vdev/vdev_params.c b/drivers/bus/vdev/vdev_params.c
index 51583fe949..68ae09e2e9 100644
--- a/drivers/bus/vdev/vdev_params.c
+++ b/drivers/bus/vdev/vdev_params.c
@@ -53,7 +53,7 @@ rte_vdev_dev_iterate(const void *start,
if (str != NULL) {
kvargs = rte_kvargs_parse(str, vdev_params_keys);
if (kvargs == NULL) {
- VDEV_LOG(ERR, "cannot parse argument list\n");
+ VDEV_LOG(ERR, "cannot parse argument list");
rte_errno = EINVAL;
return NULL;
}
diff --git a/drivers/bus/vmbus/vmbus_common.c b/drivers/bus/vmbus/vmbus_common.c
index b9139c6e6c..8a965d10d9 100644
--- a/drivers/bus/vmbus/vmbus_common.c
+++ b/drivers/bus/vmbus/vmbus_common.c
@@ -108,7 +108,7 @@ vmbus_probe_one_driver(struct rte_vmbus_driver *dr,
/* no initialization when marked as blocked, return without error */
if (dev->device.devargs != NULL &&
dev->device.devargs->policy == RTE_DEV_BLOCKED) {
- VMBUS_LOG(INFO, " Device is blocked, not initializing\n");
+ VMBUS_LOG(INFO, " Device is blocked, not initializing");
return 1;
}
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 14aff233d5..35eb8b7628 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
rc = plt_thread_create_control(&dev->sync.pfvf_msg_thread, name,
pf_vf_mbox_thread_main, dev);
if (rc != 0) {
- plt_err("Failed to create thread for VF mbox handling\n");
+ plt_err("Failed to create thread for VF mbox handling");
goto thread_fail;
}
}
diff --git a/drivers/common/cnxk/roc_model.c b/drivers/common/cnxk/roc_model.c
index 6dc2afe7f0..446ab3d2bd 100644
--- a/drivers/common/cnxk/roc_model.c
+++ b/drivers/common/cnxk/roc_model.c
@@ -153,7 +153,7 @@ cn10k_part_pass_get(uint32_t *part, uint32_t *pass)
dir = opendir(SYSFS_PCI_DEVICES);
if (dir == NULL) {
- plt_err("%s(): opendir failed: %s\n", __func__,
+ plt_err("%s(): opendir failed: %s", __func__,
strerror(errno));
return -errno;
}
diff --git a/drivers/common/cnxk/roc_nix_ops.c b/drivers/common/cnxk/roc_nix_ops.c
index 9e66ad1a49..efb0a41d07 100644
--- a/drivers/common/cnxk/roc_nix_ops.c
+++ b/drivers/common/cnxk/roc_nix_ops.c
@@ -220,7 +220,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
}
- plt_nix_dbg("tcpv4 lso fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tcpv4 lso fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TCP LSO
@@ -240,7 +240,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
}
- plt_nix_dbg("tcpv6 lso fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tcpv6 lso fmt=%u", rsp->lso_format_idx);
/*
* IPv4/UDP/TUN HDR/IPv4/TCP LSO
@@ -256,7 +256,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v4v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/UDP/TUN HDR/IPv6/TCP LSO
@@ -272,7 +272,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v4v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/UDP/TUN HDR/IPv4/TCP LSO
@@ -288,7 +288,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v6v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/UDP/TUN HDR/IPv6/TCP LSO
@@ -304,7 +304,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_udp_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- plt_nix_dbg("udp tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("udp tun v6v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/TUN HDR/IPv4/TCP LSO
@@ -320,7 +320,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V4] = rsp->lso_format_idx;
- plt_nix_dbg("tun v4v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v4v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv4/TUN HDR/IPv6/TCP LSO
@@ -336,7 +336,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V4V6] = rsp->lso_format_idx;
- plt_nix_dbg("tun v4v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v4v6 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TUN HDR/IPv4/TCP LSO
@@ -352,7 +352,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V4] = rsp->lso_format_idx;
- plt_nix_dbg("tun v6v4 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v6v4 fmt=%u", rsp->lso_format_idx);
/*
* IPv6/TUN HDR/IPv6/TCP LSO
@@ -369,7 +369,7 @@ roc_nix_lso_fmt_setup(struct roc_nix *roc_nix)
goto exit;
nix->lso_tun_idx[ROC_NIX_LSO_TUN_V6V6] = rsp->lso_format_idx;
- plt_nix_dbg("tun v6v6 fmt=%u\n", rsp->lso_format_idx);
+ plt_nix_dbg("tun v6v6 fmt=%u", rsp->lso_format_idx);
rc = 0;
exit:
mbox_put(mbox);
diff --git a/drivers/common/cnxk/roc_nix_tm.c b/drivers/common/cnxk/roc_nix_tm.c
index 9e5e614b3b..92401e04d0 100644
--- a/drivers/common/cnxk/roc_nix_tm.c
+++ b/drivers/common/cnxk/roc_nix_tm.c
@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
if (rc) {
roc_nix_tm_dump(sq->roc_nix, NULL);
roc_nix_queues_ctx_dump(sq->roc_nix, NULL);
- plt_err("Failed to drain sq %u, rc=%d\n", sq->qid, rc);
+ plt_err("Failed to drain sq %u, rc=%d", sq->qid, rc);
return rc;
}
/* Freed all pending SQEs for this SQ, so disable this node */
diff --git a/drivers/common/cnxk/roc_nix_tm_mark.c b/drivers/common/cnxk/roc_nix_tm_mark.c
index e9a7604e79..092d0851b9 100644
--- a/drivers/common/cnxk/roc_nix_tm_mark.c
+++ b/drivers/common/cnxk/roc_nix_tm_mark.c
@@ -266,7 +266,7 @@ nix_tm_mark_init(struct nix *nix)
}
nix->tm_markfmt[i][j] = rsp->mark_format_idx;
- plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u\n", i,
+ plt_tm_dbg("Mark type: %u, Mark Color:%u, id:%u", i,
j, nix->tm_markfmt[i][j]);
}
}
diff --git a/drivers/common/cnxk/roc_nix_tm_ops.c b/drivers/common/cnxk/roc_nix_tm_ops.c
index e1cef7a670..c1b91ad92f 100644
--- a/drivers/common/cnxk/roc_nix_tm_ops.c
+++ b/drivers/common/cnxk/roc_nix_tm_ops.c
@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
/* Wait for sq entries to be flushed */
rc = roc_nix_tm_sq_flush_spin(sq);
if (rc) {
- plt_err("Failed to drain sq, rc=%d\n", rc);
+ plt_err("Failed to drain sq, rc=%d", rc);
goto cleanup;
}
}
diff --git a/drivers/common/cnxk/roc_nix_tm_utils.c b/drivers/common/cnxk/roc_nix_tm_utils.c
index 8e3da95a45..4a09cc2aae 100644
--- a/drivers/common/cnxk/roc_nix_tm_utils.c
+++ b/drivers/common/cnxk/roc_nix_tm_utils.c
@@ -583,7 +583,7 @@ nix_tm_topology_reg_prep(struct nix *nix, struct nix_tm_node *node,
/* Configure TL4 to send to SDP channel instead of CGX/LBK */
if (nix->sdp_link) {
relchan = nix->tx_chan_base & 0xff;
- plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u\n", relchan, schq,
+ plt_tm_dbg("relchan=%u schq=%u tx_chan_cnt=%u", relchan, schq,
nix->tx_chan_cnt);
reg[k] = NIX_AF_TL4X_SDP_LINK_CFG(schq);
regval[k] = BIT_ULL(12);
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index 748d287bad..b02c9c7f38 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
mbox_alloc_msg_free_rsrc_cnt(mbox);
rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
if (rc) {
- plt_err("Failed to get free resource count\n");
+ plt_err("Failed to get free resource count");
rc = -EIO;
goto exit;
}
diff --git a/drivers/common/cnxk/roc_tim.c b/drivers/common/cnxk/roc_tim.c
index f8607b2852..d39af3c85e 100644
--- a/drivers/common/cnxk/roc_tim.c
+++ b/drivers/common/cnxk/roc_tim.c
@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
mbox_alloc_msg_free_rsrc_cnt(mbox);
rc = mbox_process_msg(mbox, (void **)&rsrc_cnt);
if (rc) {
- plt_err("Failed to get free resource count\n");
+ plt_err("Failed to get free resource count");
mbox_put(mbox);
return -EIO;
}
diff --git a/drivers/common/cpt/cpt_ucode.h b/drivers/common/cpt/cpt_ucode.h
index b393be4cf6..2e6846312b 100644
--- a/drivers/common/cpt/cpt_ucode.h
+++ b/drivers/common/cpt/cpt_ucode.h
@@ -2589,7 +2589,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform,
sess->cpt_op |= CPT_OP_CIPHER_DECRYPT;
sess->cpt_op |= CPT_OP_AUTH_VERIFY;
} else {
- CPT_LOG_DP_ERR("Unknown aead operation\n");
+ CPT_LOG_DP_ERR("Unknown aead operation");
return -1;
}
switch (aead_form->algo) {
@@ -2658,7 +2658,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform,
ctx->dec_auth = 1;
}
} else {
- CPT_LOG_DP_ERR("Unknown cipher operation\n");
+ CPT_LOG_DP_ERR("Unknown cipher operation");
return -1;
}
diff --git a/drivers/common/idpf/idpf_common_logs.h b/drivers/common/idpf/idpf_common_logs.h
index f6be84ceb5..105450774e 100644
--- a/drivers/common/idpf/idpf_common_logs.h
+++ b/drivers/common/idpf/idpf_common_logs.h
@@ -9,7 +9,7 @@
extern int idpf_common_logtype;
-#define DRV_LOG_RAW(level, ...) \
+#define DRV_LOG(level, ...) \
rte_log(RTE_LOG_ ## level, \
idpf_common_logtype, \
RTE_FMT("%s(): " \
@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
__func__, \
RTE_FMT_TAIL(__VA_ARGS__,)))
-#define DRV_LOG(level, fmt, args...) \
- DRV_LOG_RAW(level, fmt "\n", ## args)
-
#ifdef RTE_LIBRTE_IDPF_DEBUG_RX
#define RX_LOG(level, ...) \
RTE_LOG(level, \
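The hunk above folds newline handling into the logging macro itself, so call sites drop their trailing "\n" from the format strings. Below is a minimal, standalone sketch of that pattern, assuming a hypothetical DEMO_LOG_LINE helper (illustration only, not the DPDK macros shown in the diff):

/* Illustration: the macro appends the newline once; call sites do not. */
#include <stdio.h>

#define DEMO_LOG_LINE(fmt, ...) \
	printf("%s(): " fmt "\n", __func__, ##__VA_ARGS__)

static void demo_setup(void)
{
	/* No trailing "\n" here; DEMO_LOG_LINE supplies it. */
	DEMO_LOG_LINE("queue %d setup done", 3);
}

int main(void)
{
	demo_setup();
	return 0;
}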
diff --git a/drivers/common/octeontx/octeontx_mbox.c b/drivers/common/octeontx/octeontx_mbox.c
index 4fd3fda721..f98942c79c 100644
--- a/drivers/common/octeontx/octeontx_mbox.c
+++ b/drivers/common/octeontx/octeontx_mbox.c
@@ -264,7 +264,7 @@ octeontx_start_domain(void)
result = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
if (result != 0) {
- mbox_log_err("Could not start domain. Err=%d. FuncErr=%d\n",
+ mbox_log_err("Could not start domain. Err=%d. FuncErr=%d",
result, hdr.res_code);
result = -EINVAL;
}
@@ -288,7 +288,7 @@ octeontx_check_mbox_version(struct mbox_intf_ver *app_intf_ver,
sizeof(struct mbox_intf_ver),
&kernel_intf_ver, sizeof(kernel_intf_ver));
if (result != sizeof(kernel_intf_ver)) {
- mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d\n",
+ mbox_log_err("Could not send interface version. Err=%d. FuncErr=%d",
result, hdr.res_code);
result = -EINVAL;
}
diff --git a/drivers/common/qat/qat_pf2vf.c b/drivers/common/qat/qat_pf2vf.c
index 621f12fce2..9b25fdc6a0 100644
--- a/drivers/common/qat/qat_pf2vf.c
+++ b/drivers/common/qat/qat_pf2vf.c
@@ -36,7 +36,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
}
if ((pf2vf_msg.msg_type & type_mask) != pf2vf_msg.msg_type) {
- QAT_LOG(ERR, "PF2VF message type 0x%X out of range\n",
+ QAT_LOG(ERR, "PF2VF message type 0x%X out of range",
pf2vf_msg.msg_type);
return -EINVAL;
}
@@ -65,7 +65,7 @@ int qat_pf2vf_exch_msg(struct qat_pci_device *qat_dev,
(++count < ADF_IOV_MSG_ACK_MAX_RETRY));
if (val & ADF_PFVF_INT) {
- QAT_LOG(ERR, "ACK not received from remote\n");
+ QAT_LOG(ERR, "ACK not received from remote");
return -EIO;
}
diff --git a/drivers/common/qat/qat_qp.c b/drivers/common/qat/qat_qp.c
index f95dd33375..21a110d22e 100644
--- a/drivers/common/qat/qat_qp.c
+++ b/drivers/common/qat/qat_qp.c
@@ -267,7 +267,7 @@ qat_queue_create(struct qat_pci_device *qat_dev, struct qat_queue *queue,
if (qat_qp_check_queue_alignment(queue->base_phys_addr,
queue_size_bytes)) {
QAT_LOG(ERR, "Invalid alignment on queue create "
- " 0x%"PRIx64"\n",
+ " 0x%"PRIx64,
queue->base_phys_addr);
ret = -EFAULT;
goto queue_create_err;
diff --git a/drivers/compress/isal/isal_compress_pmd.c b/drivers/compress/isal/isal_compress_pmd.c
index cb23e929ed..0e783243a8 100644
--- a/drivers/compress/isal/isal_compress_pmd.c
+++ b/drivers/compress/isal/isal_compress_pmd.c
@@ -42,10 +42,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
/* Set private xform algorithm */
if (xform->compress.algo != RTE_COMP_ALGO_DEFLATE) {
if (xform->compress.algo == RTE_COMP_ALGO_NULL) {
- ISAL_PMD_LOG(ERR, "By-pass not supported\n");
+ ISAL_PMD_LOG(ERR, "By-pass not supported");
return -ENOTSUP;
}
- ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+ ISAL_PMD_LOG(ERR, "Algorithm not supported");
return -ENOTSUP;
}
priv_xform->compress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -55,7 +55,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
priv_xform->compress.window_size =
RTE_COMP_ISAL_WINDOW_SIZE;
else {
- ISAL_PMD_LOG(ERR, "Window size not supported\n");
+ ISAL_PMD_LOG(ERR, "Window size not supported");
return -ENOTSUP;
}
@@ -74,7 +74,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
RTE_COMP_HUFFMAN_DYNAMIC;
break;
default:
- ISAL_PMD_LOG(ERR, "Huffman code not supported\n");
+ ISAL_PMD_LOG(ERR, "Huffman code not supported");
return -ENOTSUP;
}
@@ -92,10 +92,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
break;
case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
- " supported\n");
+ " supported");
return -ENOTSUP;
default:
- ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+ ISAL_PMD_LOG(ERR, "Checksum type not supported");
priv_xform->compress.chksum = IGZIP_DEFLATE;
break;
}
@@ -105,21 +105,21 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
*/
if (xform->compress.level < RTE_COMP_LEVEL_PMD_DEFAULT ||
xform->compress.level > RTE_COMP_LEVEL_MAX) {
- ISAL_PMD_LOG(ERR, "Compression level out of range\n");
+ ISAL_PMD_LOG(ERR, "Compression level out of range");
return -EINVAL;
}
/* Check for Compressdev API level 0, No compression
* not supported in ISA-L
*/
else if (xform->compress.level == RTE_COMP_LEVEL_NONE) {
- ISAL_PMD_LOG(ERR, "No Compression not supported\n");
+ ISAL_PMD_LOG(ERR, "No Compression not supported");
return -ENOTSUP;
}
/* If using fixed huffman code, level must be 0 */
else if (priv_xform->compress.deflate.huffman ==
RTE_COMP_HUFFMAN_FIXED) {
ISAL_PMD_LOG(DEBUG, "ISA-L level 0 used due to a"
- " fixed huffman code\n");
+ " fixed huffman code");
priv_xform->compress.level = RTE_COMP_ISAL_LEVEL_ZERO;
priv_xform->level_buffer_size =
ISAL_DEF_LVL0_DEFAULT;
@@ -169,7 +169,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
ISAL_PMD_LOG(DEBUG, "Requested ISA-L level"
" 3 or above; Level 3 optimized"
" for AVX512 & AVX2 only."
- " level changed to 2.\n");
+ " level changed to 2.");
priv_xform->compress.level =
RTE_COMP_ISAL_LEVEL_TWO;
priv_xform->level_buffer_size =
@@ -188,10 +188,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
/* Set private xform algorithm */
if (xform->decompress.algo != RTE_COMP_ALGO_DEFLATE) {
if (xform->decompress.algo == RTE_COMP_ALGO_NULL) {
- ISAL_PMD_LOG(ERR, "By pass not supported\n");
+ ISAL_PMD_LOG(ERR, "By pass not supported");
return -ENOTSUP;
}
- ISAL_PMD_LOG(ERR, "Algorithm not supported\n");
+ ISAL_PMD_LOG(ERR, "Algorithm not supported");
return -ENOTSUP;
}
priv_xform->decompress.algo = RTE_COMP_ALGO_DEFLATE;
@@ -210,10 +210,10 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
break;
case(RTE_COMP_CHECKSUM_CRC32_ADLER32):
ISAL_PMD_LOG(ERR, "Combined CRC and ADLER checksum not"
- " supported\n");
+ " supported");
return -ENOTSUP;
default:
- ISAL_PMD_LOG(ERR, "Checksum type not supported\n");
+ ISAL_PMD_LOG(ERR, "Checksum type not supported");
priv_xform->decompress.chksum = ISAL_DEFLATE;
break;
}
@@ -223,7 +223,7 @@ isal_comp_set_priv_xform_parameters(struct isal_priv_xform *priv_xform,
priv_xform->decompress.window_size =
RTE_COMP_ISAL_WINDOW_SIZE;
else {
- ISAL_PMD_LOG(ERR, "Window size not supported\n");
+ ISAL_PMD_LOG(ERR, "Window size not supported");
return -ENOTSUP;
}
}
@@ -263,7 +263,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
remaining_offset);
if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
- ISAL_PMD_LOG(ERR, "Invalid source or destination buffer\n");
+ ISAL_PMD_LOG(ERR, "Invalid source or destination buffer");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -279,7 +279,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
remaining_data = op->src.length - qp->stream->total_in;
if (ret != COMP_OK) {
- ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Compression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -294,7 +294,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
RTE_MIN(remaining_data, src->data_len);
} else {
ISAL_PMD_LOG(ERR,
- "Not enough input buffer segments\n");
+ "Not enough input buffer segments");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -309,7 +309,7 @@ chained_mbuf_compression(struct rte_comp_op *op, struct isal_comp_qp *qp)
qp->stream->avail_out = dst->data_len;
} else {
ISAL_PMD_LOG(ERR,
- "Not enough output buffer segments\n");
+ "Not enough output buffer segments");
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return -1;
@@ -378,14 +378,14 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)
if (ret == ISAL_OUT_OVERFLOW) {
ISAL_PMD_LOG(ERR, "Decompression operation ran "
- "out of space, but can be recovered.\n%d bytes "
- "consumed\t%d bytes produced\n",
+ "out of space, but can be recovered.%d bytes "
+ "consumed\t%d bytes produced",
consumed_data, qp->state->total_out);
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE;
return ret;
} else if (ret < 0) {
- ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Decompression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -399,7 +399,7 @@ chained_mbuf_decompression(struct rte_comp_op *op, struct isal_comp_qp *qp)
qp->state->avail_out = dst->data_len;
} else {
ISAL_PMD_LOG(ERR,
- "Not enough output buffer segments\n");
+ "Not enough output buffer segments");
op->status =
RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return -1;
@@ -451,14 +451,14 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
IGZIP_HUFFTABLE_DEFAULT);
if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
- ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+ ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
if (op->dst.offset >= op->m_dst->pkt_len) {
ISAL_PMD_LOG(ERR, "Output mbuf(s) not big enough"
- " for offset provided.\n");
+ " for offset provided.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -483,7 +483,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
if (unlikely(!qp->stream->next_in || !qp->stream->next_out)) {
ISAL_PMD_LOG(ERR, "Invalid source or destination"
- " buffers\n");
+ " buffers");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -493,7 +493,7 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that output buffer did not run out of space */
if (ret == STATELESS_OVERFLOW) {
- ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+ ISAL_PMD_LOG(ERR, "Output buffer not big enough");
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return ret;
}
@@ -501,13 +501,13 @@ process_isal_deflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that input buffer has been fully consumed */
if (qp->stream->avail_in != (uint32_t)0) {
ISAL_PMD_LOG(ERR, "Input buffer could not be read"
- " entirely\n");
+ " entirely");
op->status = RTE_COMP_OP_STATUS_ERROR;
return -1;
}
if (ret != COMP_OK) {
- ISAL_PMD_LOG(ERR, "Compression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Compression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -543,14 +543,14 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
qp->state->crc_flag = priv_xform->decompress.chksum;
if (op->m_src->pkt_len < (op->src.length + op->src.offset)) {
- ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.\n");
+ ISAL_PMD_LOG(ERR, "Input mbuf(s) not big enough.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
if (op->dst.offset >= op->m_dst->pkt_len) {
ISAL_PMD_LOG(ERR, "Output mbuf not big enough for "
- "offset provided.\n");
+ "offset provided.");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -574,7 +574,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
if (unlikely(!qp->state->next_in || !qp->state->next_out)) {
ISAL_PMD_LOG(ERR, "Invalid source or destination"
- " buffers\n");
+ " buffers");
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
return -1;
}
@@ -583,7 +583,7 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
ret = isal_inflate_stateless(qp->state);
if (ret == ISAL_OUT_OVERFLOW) {
- ISAL_PMD_LOG(ERR, "Output buffer not big enough\n");
+ ISAL_PMD_LOG(ERR, "Output buffer not big enough");
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
return ret;
}
@@ -591,13 +591,13 @@ process_isal_inflate(struct rte_comp_op *op, struct isal_comp_qp *qp,
/* Check that input buffer has been fully consumed */
if (qp->state->avail_in != (uint32_t)0) {
ISAL_PMD_LOG(ERR, "Input buffer could not be read"
- " entirely\n");
+ " entirely");
op->status = RTE_COMP_OP_STATUS_ERROR;
return -1;
}
if (ret != ISAL_DECOMP_OK && ret != ISAL_END_INPUT) {
- ISAL_PMD_LOG(ERR, "Decompression operation failed\n");
+ ISAL_PMD_LOG(ERR, "Decompression operation failed");
op->status = RTE_COMP_OP_STATUS_ERROR;
return ret;
}
@@ -622,7 +622,7 @@ process_op(struct isal_comp_qp *qp, struct rte_comp_op *op,
process_isal_inflate(op, qp, priv_xform);
break;
default:
- ISAL_PMD_LOG(ERR, "Operation Not Supported\n");
+ ISAL_PMD_LOG(ERR, "Operation Not Supported");
return -ENOTSUP;
}
return 0;
@@ -641,7 +641,7 @@ isal_comp_pmd_enqueue_burst(void *queue_pair, struct rte_comp_op **ops,
for (i = 0; i < num_enq; i++) {
if (unlikely(ops[i]->op_type != RTE_COMP_OP_STATELESS)) {
ops[i]->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ISAL_PMD_LOG(ERR, "Stateful operation not Supported\n");
+ ISAL_PMD_LOG(ERR, "Stateful operation not Supported");
qp->qp_stats.enqueue_err_count++;
continue;
}
@@ -696,7 +696,7 @@ compdev_isal_create(const char *name, struct rte_vdev_device *vdev,
dev->dequeue_burst = isal_comp_pmd_dequeue_burst;
dev->enqueue_burst = isal_comp_pmd_enqueue_burst;
- ISAL_PMD_LOG(INFO, "\nISA-L library version used: "ISAL_VERSION_STRING);
+ ISAL_PMD_LOG(INFO, "ISA-L library version used: "ISAL_VERSION_STRING);
return 0;
}
@@ -739,7 +739,7 @@ compdev_isal_probe(struct rte_vdev_device *dev)
retval = rte_compressdev_pmd_parse_input_args(&init_params, args);
if (retval) {
ISAL_PMD_LOG(ERR,
- "Failed to parse initialisation arguments[%s]\n", args);
+ "Failed to parse initialisation arguments[%s]", args);
return -EINVAL;
}
diff --git a/drivers/compress/octeontx/otx_zip.h b/drivers/compress/octeontx/otx_zip.h
index 7391360925..d52f937548 100644
--- a/drivers/compress/octeontx/otx_zip.h
+++ b/drivers/compress/octeontx/otx_zip.h
@@ -206,7 +206,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
break;
}
- ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "ZIP SGL buf[%d], len = %d, iova = 0x%"PRIx64,
sgidx, sginfo[sgidx].sg_ctl.s.length, sginfo[sgidx].sg_addr.s.addr);
++sgidx;
}
@@ -219,7 +219,7 @@ zipvf_prepare_sgl(struct rte_mbuf *buf, int64_t offset, struct zipvf_sginfo *sg_
}
qp->num_sgbuf = ++sgidx;
- ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64"\n", tot_buf_len,
+ ZIP_PMD_LOG(DEBUG, "Tot_buf_len:%d max_segs:%"PRIx64, tot_buf_len,
qp->num_sgbuf);
return ret;
}
@@ -246,7 +246,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
inst->s.inp_ptr_ctl.s.length = qp->num_sgbuf;
inst->s.inp_ptr_ctl.s.fw = 0;
- ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "Gather(input): len(nb_segs):%d, iova: 0x%"PRIx64,
inst->s.inp_ptr_ctl.s.length, inst->s.inp_ptr_addr.s.addr);
return ret;
}
@@ -256,7 +256,7 @@ zipvf_prepare_in_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_com
inst->s.inp_ptr_addr.s.addr = rte_pktmbuf_iova_offset(m_src, offset);
inst->s.inp_ptr_ctl.s.length = inlen;
- ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d\n", inlen);
+ ZIP_PMD_LOG(DEBUG, "Direct input - inlen:%d", inlen);
return ret;
}
@@ -282,7 +282,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co
inst->s.out_ptr_addr.s.addr = rte_mem_virt2iova(qp->s_info);
inst->s.out_ptr_ctl.s.length = qp->num_sgbuf;
- ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64"\n",
+ ZIP_PMD_LOG(DEBUG, "Scatter(output): nb_segs:%d, iova:0x%"PRIx64,
inst->s.out_ptr_ctl.s.length, inst->s.out_ptr_addr.s.addr);
return ret;
}
@@ -296,7 +296,7 @@ zipvf_prepare_out_buf(union zip_inst_s *inst, struct zipvf_qp *qp, struct rte_co
inst->s.out_ptr_ctl.s.length = inst->s.totaloutputlength;
- ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d\n", inst->s.totaloutputlength);
+ ZIP_PMD_LOG(DEBUG, "Direct output - outlen:%d", inst->s.totaloutputlength);
return ret;
}
diff --git a/drivers/compress/octeontx/otx_zip_pmd.c b/drivers/compress/octeontx/otx_zip_pmd.c
index fd20139da6..c8f456b319 100644
--- a/drivers/compress/octeontx/otx_zip_pmd.c
+++ b/drivers/compress/octeontx/otx_zip_pmd.c
@@ -161,7 +161,7 @@ zip_set_stream_parameters(struct rte_compressdev *dev,
*/
} else {
- ZIP_PMD_ERR("\nxform type not supported");
+ ZIP_PMD_ERR("xform type not supported");
ret = -1;
goto err;
}
@@ -527,7 +527,7 @@ zip_pmd_enqueue_burst(void *queue_pair,
}
qp->enqed = enqd;
- ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d\n", nb_ops, enqd);
+ ZIP_PMD_LOG(DEBUG, "ops_enqd[nb_ops:%d]:%d", nb_ops, enqd);
return enqd;
}
@@ -563,7 +563,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->status = RTE_COMP_OP_STATUS_SUCCESS;
} else {
/* FATAL error cannot do anything */
- ZIP_PMD_ERR("operation failed with error code:%d\n",
+ ZIP_PMD_ERR("operation failed with error code:%d",
zresult->s.compcode);
if (zresult->s.compcode == ZIP_COMP_E_DSTOP)
op->status = RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED;
@@ -571,7 +571,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->status = RTE_COMP_OP_STATUS_ERROR;
}
- ZIP_PMD_LOG(DEBUG, "written %d\n", zresult->s.totalbyteswritten);
+ ZIP_PMD_LOG(DEBUG, "written %d", zresult->s.totalbyteswritten);
/* Update op stats */
switch (op->status) {
@@ -582,7 +582,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
op->produced = zresult->s.totalbyteswritten;
break;
default:
- ZIP_PMD_ERR("stats not updated for status:%d\n",
+ ZIP_PMD_ERR("stats not updated for status:%d",
op->status);
break;
}
@@ -598,7 +598,7 @@ zip_pmd_dequeue_burst(void *queue_pair,
rte_mempool_put(qp->vf->sg_mp, qp->s_info);
}
- ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d\n", nb_ops, nb_dequeued);
+ ZIP_PMD_LOG(DEBUG, "ops_deqd[nb_ops:%d]: %d", nb_ops, nb_dequeued);
return nb_dequeued;
}
@@ -676,7 +676,7 @@ zip_pci_remove(struct rte_pci_device *pci_dev)
char compressdev_name[RTE_COMPRESSDEV_NAME_MAX_LEN];
if (pci_dev == NULL) {
- ZIP_PMD_ERR(" Invalid PCI Device\n");
+ ZIP_PMD_ERR(" Invalid PCI Device");
return -EINVAL;
}
rte_pci_device_name(&pci_dev->addr, compressdev_name,
diff --git a/drivers/compress/zlib/zlib_pmd.c b/drivers/compress/zlib/zlib_pmd.c
index 98abd41013..92e808e78c 100644
--- a/drivers/compress/zlib/zlib_pmd.c
+++ b/drivers/compress/zlib/zlib_pmd.c
@@ -29,13 +29,13 @@ process_zlib_deflate(struct rte_comp_op *op, z_stream *strm)
break;
default:
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid flush value\n");
+ ZLIB_PMD_ERR("Invalid flush value");
return;
}
if (unlikely(!strm)) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid z_stream\n");
+ ZLIB_PMD_ERR("Invalid z_stream");
return;
}
/* Update z_stream with the inputs provided by application */
@@ -98,7 +98,7 @@ def_end:
op->produced += strm->total_out;
break;
default:
- ZLIB_PMD_ERR("stats not updated for status:%d\n",
+ ZLIB_PMD_ERR("stats not updated for status:%d",
op->status);
}
@@ -114,7 +114,7 @@ process_zlib_inflate(struct rte_comp_op *op, z_stream *strm)
if (unlikely(!strm)) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
- ZLIB_PMD_ERR("Invalid z_stream\n");
+ ZLIB_PMD_ERR("Invalid z_stream");
return;
}
strm->next_in = rte_pktmbuf_mtod_offset(mbuf_src, uint8_t *,
@@ -184,7 +184,7 @@ inf_end:
op->produced += strm->total_out;
break;
default:
- ZLIB_PMD_ERR("stats not produced for status:%d\n",
+ ZLIB_PMD_ERR("stats not produced for status:%d",
op->status);
}
@@ -203,7 +203,7 @@ process_zlib_op(struct zlib_qp *qp, struct rte_comp_op *op)
(op->dst.offset > rte_pktmbuf_data_len(op->m_dst))) {
op->status = RTE_COMP_OP_STATUS_INVALID_ARGS;
ZLIB_PMD_ERR("Invalid source or destination buffers or "
- "invalid Operation requested\n");
+ "invalid Operation requested");
} else {
private_xform = (struct zlib_priv_xform *)op->private_xform;
stream = &private_xform->stream;
@@ -238,7 +238,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
wbits = -(xform->compress.window_size);
break;
default:
- ZLIB_PMD_ERR("Compression algorithm not supported\n");
+ ZLIB_PMD_ERR("Compression algorithm not supported");
return -1;
}
/** Compression Level */
@@ -260,7 +260,7 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
if (level < RTE_COMP_LEVEL_MIN ||
level > RTE_COMP_LEVEL_MAX) {
ZLIB_PMD_ERR("Compression level %d "
- "not supported\n",
+ "not supported",
level);
return -1;
}
@@ -278,13 +278,13 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
strategy = Z_DEFAULT_STRATEGY;
break;
default:
- ZLIB_PMD_ERR("Compression strategy not supported\n");
+ ZLIB_PMD_ERR("Compression strategy not supported");
return -1;
}
if (deflateInit2(strm, level,
Z_DEFLATED, wbits,
DEF_MEM_LEVEL, strategy) != Z_OK) {
- ZLIB_PMD_ERR("Deflate init failed\n");
+ ZLIB_PMD_ERR("Deflate init failed");
return -1;
}
break;
@@ -298,12 +298,12 @@ zlib_set_stream_parameters(const struct rte_comp_xform *xform,
wbits = -(xform->decompress.window_size);
break;
default:
- ZLIB_PMD_ERR("Compression algorithm not supported\n");
+ ZLIB_PMD_ERR("Compression algorithm not supported");
return -1;
}
if (inflateInit2(strm, wbits) != Z_OK) {
- ZLIB_PMD_ERR("Inflate init failed\n");
+ ZLIB_PMD_ERR("Inflate init failed");
return -1;
}
break;
@@ -395,7 +395,7 @@ zlib_probe(struct rte_vdev_device *vdev)
retval = rte_compressdev_pmd_parse_input_args(&init_params, input_args);
if (retval < 0) {
ZLIB_PMD_LOG(ERR,
- "Failed to parse initialisation arguments[%s]\n",
+ "Failed to parse initialisation arguments[%s]",
input_args);
return -EINVAL;
}
diff --git a/drivers/compress/zlib/zlib_pmd_ops.c b/drivers/compress/zlib/zlib_pmd_ops.c
index 445a3baa67..a530d15119 100644
--- a/drivers/compress/zlib/zlib_pmd_ops.c
+++ b/drivers/compress/zlib/zlib_pmd_ops.c
@@ -48,8 +48,8 @@ zlib_pmd_config(struct rte_compressdev *dev,
NULL, config->socket_id,
0);
if (mp == NULL) {
- ZLIB_PMD_ERR("Cannot create private xform pool on "
- "socket %d\n", config->socket_id);
+ ZLIB_PMD_ERR("Cannot create private xform pool on socket %d",
+ config->socket_id);
return -ENOMEM;
}
internals->mp = mp;
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
index d1ede5e990..59e39a6c14 100644
--- a/drivers/crypto/bcmfs/bcmfs_qp.c
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -142,7 +142,7 @@ bcmfs_queue_create(struct bcmfs_queue *queue,
if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) {
BCMFS_LOG(ERR, "Invalid alignment on queue create "
- " 0x%" PRIx64 "\n",
+ " 0x%" PRIx64,
queue->base_phys_addr);
ret = -EFAULT;
goto queue_create_err;
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
index 78272d616c..d3b1e25d57 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c
@@ -217,7 +217,7 @@ bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id,
bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr;
cdev->data->queue_pairs[qp_id] = qp;
- BCMFS_LOG(NOTICE, "queue %d setup done\n", qp_id);
+ BCMFS_LOG(NOTICE, "queue %d setup done", qp_id);
return 0;
}
diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c
index 40813d1fe5..64bd4a317a 100644
--- a/drivers/crypto/bcmfs/bcmfs_sym_session.c
+++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c
@@ -192,7 +192,7 @@ crypto_set_session_parameters(struct bcmfs_sym_session *sess,
rc = -EINVAL;
break;
default:
- BCMFS_DP_LOG(ERR, "Invalid chain order\n");
+ BCMFS_DP_LOG(ERR, "Invalid chain order");
rc = -EINVAL;
break;
}
diff --git a/drivers/crypto/caam_jr/caam_jr.c b/drivers/crypto/caam_jr/caam_jr.c
index b55258689b..1713600db7 100644
--- a/drivers/crypto/caam_jr/caam_jr.c
+++ b/drivers/crypto/caam_jr/caam_jr.c
@@ -309,7 +309,7 @@ caam_jr_prep_cdb(struct caam_jr_session *ses)
cdb = caam_jr_dma_mem_alloc(L1_CACHE_BYTES, sizeof(struct sec_cdb));
if (!cdb) {
- CAAM_JR_ERR("failed to allocate memory for cdb\n");
+ CAAM_JR_ERR("failed to allocate memory for cdb");
return -1;
}
@@ -606,7 +606,7 @@ hw_poll_job_ring(struct sec_job_ring_t *job_ring,
/*TODO for multiple ops, packets*/
ctx = container_of(current_desc, struct caam_jr_op_ctx, jobdes);
if (unlikely(sec_error_code)) {
- CAAM_JR_ERR("desc at cidx %d generated error 0x%x\n",
+ CAAM_JR_ERR("desc at cidx %d generated error 0x%x",
job_ring->cidx, sec_error_code);
hw_handle_job_ring_error(job_ring, sec_error_code);
//todo improve with exact errors
@@ -1368,7 +1368,7 @@ caam_jr_enqueue_op(struct rte_crypto_op *op, struct caam_jr_qp *qp)
}
if (unlikely(!ses->qp || ses->qp != qp)) {
- CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p\n", ses->qp, qp);
+ CAAM_JR_DP_DEBUG("Old:sess->qp=%p New qp = %p", ses->qp, qp);
ses->qp = qp;
caam_jr_prep_cdb(ses);
}
@@ -1554,7 +1554,7 @@ caam_jr_cipher_init(struct rte_cryptodev *dev __rte_unused,
session->cipher_key.data = rte_zmalloc(NULL, xform->cipher.key.length,
RTE_CACHE_LINE_SIZE);
if (session->cipher_key.data == NULL && xform->cipher.key.length > 0) {
- CAAM_JR_ERR("No Memory for cipher key\n");
+ CAAM_JR_ERR("No Memory for cipher key");
return -ENOMEM;
}
session->cipher_key.length = xform->cipher.key.length;
@@ -1576,7 +1576,7 @@ caam_jr_auth_init(struct rte_cryptodev *dev __rte_unused,
session->auth_key.data = rte_zmalloc(NULL, xform->auth.key.length,
RTE_CACHE_LINE_SIZE);
if (session->auth_key.data == NULL && xform->auth.key.length > 0) {
- CAAM_JR_ERR("No Memory for auth key\n");
+ CAAM_JR_ERR("No Memory for auth key");
return -ENOMEM;
}
session->auth_key.length = xform->auth.key.length;
@@ -1602,7 +1602,7 @@ caam_jr_aead_init(struct rte_cryptodev *dev __rte_unused,
session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
RTE_CACHE_LINE_SIZE);
if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
- CAAM_JR_ERR("No Memory for aead key\n");
+ CAAM_JR_ERR("No Memory for aead key");
return -ENOMEM;
}
session->aead_key.length = xform->aead.key.length;
@@ -1755,7 +1755,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
RTE_CACHE_LINE_SIZE);
if (session->cipher_key.data == NULL &&
cipher_xform->key.length > 0) {
- CAAM_JR_ERR("No Memory for cipher key\n");
+ CAAM_JR_ERR("No Memory for cipher key");
return -ENOMEM;
}
@@ -1765,7 +1765,7 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
RTE_CACHE_LINE_SIZE);
if (session->auth_key.data == NULL &&
auth_xform->key.length > 0) {
- CAAM_JR_ERR("No Memory for auth key\n");
+ CAAM_JR_ERR("No Memory for auth key");
rte_free(session->cipher_key.data);
return -ENOMEM;
}
@@ -1810,11 +1810,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
case RTE_CRYPTO_AUTH_KASUMI_F9:
case RTE_CRYPTO_AUTH_AES_CBC_MAC:
case RTE_CRYPTO_AUTH_ZUC_EIA3:
- CAAM_JR_ERR("Crypto: Unsupported auth alg %u\n",
+ CAAM_JR_ERR("Crypto: Unsupported auth alg %u",
auth_xform->algo);
goto out;
default:
- CAAM_JR_ERR("Crypto: Undefined Auth specified %u\n",
+ CAAM_JR_ERR("Crypto: Undefined Auth specified %u",
auth_xform->algo);
goto out;
}
@@ -1834,11 +1834,11 @@ caam_jr_set_ipsec_session(__rte_unused struct rte_cryptodev *dev,
case RTE_CRYPTO_CIPHER_3DES_ECB:
case RTE_CRYPTO_CIPHER_AES_ECB:
case RTE_CRYPTO_CIPHER_KASUMI_F8:
- CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u\n",
+ CAAM_JR_ERR("Crypto: Unsupported Cipher alg %u",
cipher_xform->algo);
goto out;
default:
- CAAM_JR_ERR("Crypto: Undefined Cipher specified %u\n",
+ CAAM_JR_ERR("Crypto: Undefined Cipher specified %u",
cipher_xform->algo);
goto out;
}
@@ -1962,7 +1962,7 @@ caam_jr_dev_configure(struct rte_cryptodev *dev,
NULL, NULL, NULL, NULL,
SOCKET_ID_ANY, 0);
if (!internals->ctx_pool) {
- CAAM_JR_ERR("%s create failed\n", str);
+ CAAM_JR_ERR("%s create failed", str);
return -ENOMEM;
}
} else
@@ -2180,7 +2180,7 @@ init_job_ring(void *reg_base_addr, int irq_id)
}
}
if (job_ring == NULL) {
- CAAM_JR_ERR("No free job ring\n");
+ CAAM_JR_ERR("No free job ring");
return NULL;
}
@@ -2301,7 +2301,7 @@ caam_jr_dev_init(const char *name,
job_ring->uio_fd);
if (!dev->data->dev_private) {
- CAAM_JR_ERR("Ring memory allocation failed\n");
+ CAAM_JR_ERR("Ring memory allocation failed");
goto cleanup2;
}
@@ -2334,7 +2334,7 @@ caam_jr_dev_init(const char *name,
security_instance = rte_malloc("caam_jr",
sizeof(struct rte_security_ctx), 0);
if (security_instance == NULL) {
- CAAM_JR_ERR("memory allocation failed\n");
+ CAAM_JR_ERR("memory allocation failed");
//todo error handling.
goto cleanup2;
}
diff --git a/drivers/crypto/caam_jr/caam_jr_uio.c b/drivers/crypto/caam_jr/caam_jr_uio.c
index 583ba3b523..acb40bdf77 100644
--- a/drivers/crypto/caam_jr/caam_jr_uio.c
+++ b/drivers/crypto/caam_jr/caam_jr_uio.c
@@ -338,7 +338,7 @@ free_job_ring(int uio_fd)
}
if (job_ring == NULL) {
- CAAM_JR_ERR("JR not available for fd = %x\n", uio_fd);
+ CAAM_JR_ERR("JR not available for fd = %x", uio_fd);
return;
}
@@ -378,7 +378,7 @@ uio_job_ring *config_job_ring(void)
}
if (job_ring == NULL) {
- CAAM_JR_ERR("No free job ring\n");
+ CAAM_JR_ERR("No free job ring");
return NULL;
}
@@ -441,7 +441,7 @@ sec_configure(void)
dir->d_name, "name", uio_name);
CAAM_JR_INFO("sec device uio name: %s", uio_name);
if (ret != 0) {
- CAAM_JR_ERR("file_read_first_line failed\n");
+ CAAM_JR_ERR("file_read_first_line failed");
closedir(d);
return -1;
}
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index b7ca3af5a4..6d42b92d8b 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -362,7 +362,7 @@ ccp_find_lsb_regions(struct ccp_queue *cmd_q, uint64_t status)
if (ccp_get_bit(&cmd_q->lsbmask, j))
weight++;
- CCP_LOG_DBG("Queue %d can access %d LSB regions of mask %lu\n",
+ CCP_LOG_DBG("Queue %d can access %d LSB regions of mask %lu",
(int)cmd_q->id, weight, cmd_q->lsbmask);
return weight ? 0 : -EINVAL;
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index a5271d7227..c92fdb446d 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
}
cryptodev_cnt++;
- CCP_LOG_DBG("CCP : Crypto device count = %d\n", cryptodev_cnt);
+ CCP_LOG_DBG("CCP : Crypto device count = %d", cryptodev_cnt);
dev->device = &pci_dev->device;
dev->device->driver = &pci_drv->driver;
dev->driver_id = ccp_cryptodev_driver_id;
diff --git a/drivers/crypto/cnxk/cnxk_se.h b/drivers/crypto/cnxk/cnxk_se.h
index c2a807fa94..cf163e0208 100644
--- a/drivers/crypto/cnxk/cnxk_se.h
+++ b/drivers/crypto/cnxk/cnxk_se.h
@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
sess->cpt_op |= ROC_SE_OP_AUTH_VERIFY;
} else {
- plt_dp_err("Unknown aead operation\n");
+ plt_dp_err("Unknown aead operation");
return -1;
}
switch (aead_form->algo) {
@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
sess->cpt_op |= ROC_SE_OP_CIPHER_DECRYPT;
sess->roc_se_ctx.template_w4.s.opcode_minor = ROC_SE_FC_MINOR_OP_DECRYPT;
} else {
- plt_dp_err("Unknown cipher operation\n");
+ plt_dp_err("Unknown cipher operation");
return -1;
}
@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
ROC_SE_FC_MINOR_OP_HMAC_FIRST;
}
} else {
- plt_dp_err("Unknown cipher operation\n");
+ plt_dp_err("Unknown cipher operation");
return -1;
}
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 6ae356ace0..b65bea3b3f 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -1146,7 +1146,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d"
- " data_off: 0x%x\n",
+ " data_off: 0x%x",
data_offset,
data_len,
sess->iv.length,
@@ -1172,7 +1172,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SET_FLE_FIN(sge);
DPAA2_SEC_DP_DEBUG(
- "CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+ "CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -1212,7 +1212,7 @@ build_cipher_sg_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER SG: fdaddr =%" PRIx64 " bpid =%d meta =%d"
- " off =%d, len =%d\n",
+ " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_BPID(fd),
rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1292,7 +1292,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER: cipher_off: 0x%x/length %d, ivlen=%d,"
- " data_off: 0x%x\n",
+ " data_off: 0x%x",
data_offset,
data_len,
sess->iv.length,
@@ -1303,7 +1303,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
fle->length = data_len + sess->iv.length;
DPAA2_SEC_DP_DEBUG(
- "CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d\n",
+ "CIPHER: 1 - flc = %p, fle = %p FLEaddr = %x-%x, length %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -1326,7 +1326,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
DPAA2_SEC_DP_DEBUG(
"CIPHER: fdaddr =%" PRIx64 " bpid =%d meta =%d"
- " off =%d, len =%d\n",
+ " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_BPID(fd),
rte_dpaa2_bpid_info[bpid].meta_data_size,
@@ -1348,12 +1348,12 @@ build_sec_fd(struct rte_crypto_op *op,
} else if (op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
sess = SECURITY_GET_SESS_PRIV(op->sym->session);
} else {
- DPAA2_SEC_DP_ERR("Session type invalid\n");
+ DPAA2_SEC_DP_ERR("Session type invalid");
return -ENOTSUP;
}
if (!sess) {
- DPAA2_SEC_DP_ERR("Session not available\n");
+ DPAA2_SEC_DP_ERR("Session not available");
return -EINVAL;
}
@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
bpid = mempool_to_bpid(mb_pool);
ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
if (ret) {
- DPAA2_SEC_DP_DEBUG("FD build failed\n");
+ DPAA2_SEC_DP_DEBUG("FD build failed");
goto skip_tx;
}
ops++;
@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
num_tx += loop;
nb_ops -= loop;
- DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+ DPAA2_SEC_DP_DEBUG("Enqueue fail");
/* freeing the fle buffers */
while (loop < frames_to_send) {
free_fle(&fd_arr[loop],
@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
- DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+ DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
/* we are using the first FLE entry to store Mbuf.
@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
}
DPAA2_SEC_DP_DEBUG("mbuf %p BMAN buf addr %p,"
- " fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d\n",
+ " fdaddr =%" PRIx64 " bpid =%d meta =%d off =%d, len =%d",
(void *)dst,
dst->buf_addr,
DPAA2_GET_FD_ADDR(fd),
@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
bpid = mempool_to_bpid(mb_pool);
ret = build_sec_fd(*ops, &fd_arr[loop], bpid, dpaa2_qp);
if (ret) {
- DPAA2_SEC_DP_DEBUG("FD build failed\n");
+ DPAA2_SEC_DP_DEBUG("FD build failed");
goto skip_tx;
}
ops++;
@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
if (retry_count > DPAA2_MAX_TX_RETRY_COUNT) {
num_tx += loop;
nb_ops -= loop;
- DPAA2_SEC_DP_DEBUG("Enqueue fail\n");
+ DPAA2_SEC_DP_DEBUG("Enqueue fail");
/* freeing the fle buffers */
while (loop < frames_to_send) {
free_fle(&fd_arr[loop],
@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely(
(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+ DPAA2_SEC_DP_DEBUG("No frame is delivered");
continue;
}
}
@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
if (unlikely(fd->simple.frc)) {
/* TODO Parse SEC errors */
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_NO_DUMP) {
- DPAA2_SEC_DP_ERR("SEC returned Error - %x\n",
+ DPAA2_SEC_DP_ERR("SEC returned Error - %x",
fd->simple.frc);
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_ERR_DUMP)
dpaa2_sec_dump(ops[num_rx]);
@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
dpaa2_qp->rx_vq.rx_pkts += num_rx;
- DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64 "\n", num_rx,
+ DPAA2_SEC_DP_DEBUG("SEC RX pkts %d err pkts %" PRIu64, num_rx,
dpaa2_qp->rx_vq.err_pkts);
/*Return the total number of packets received to DPAA2 app*/
return num_rx;
@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
#ifdef CAAM_DESC_DEBUG
int i;
for (i = 0; i < bufsize; i++)
- DPAA2_SEC_DEBUG("DESC[%d]:0x%x\n",
+ DPAA2_SEC_DEBUG("DESC[%d]:0x%x",
i, priv->flc_desc[0].desc[i]);
#endif
return ret;
@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
if (dpaa2_sec_dp_dump > DPAA2_SEC_DP_FULL_DUMP) {
DPAA2_SEC_WARN("WARN: DPAA2_SEC_DP_DUMP_LEVEL is not "
"supported, changing to FULL error"
- " prints\n");
+ " prints");
dpaa2_sec_dp_dump = DPAA2_SEC_DP_FULL_DUMP;
}
} else
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
index 4754b9d6f8..883584a6e2 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_raw_dp.c
@@ -605,7 +605,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
flc = &priv->flc_desc[0].flc;
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d\n",
+ "RAW CIPHER SG: cipher_off: 0x%x/length %d, ivlen=%d",
data_offset,
data_len,
sess->iv.length);
@@ -642,7 +642,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
DPAA2_SET_FLE_FIN(sge);
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d\n",
+ "RAW CIPHER SG: 1 - flc = %p, fle = %p FLEaddr = %x-%x, len %d",
flc, fle, fle->addr_hi, fle->addr_lo,
fle->length);
@@ -678,7 +678,7 @@ build_raw_dp_cipher_fd(uint8_t *drv_ctx,
DPAA2_SET_FD_FLC(fd, DPAA2_VADDR_TO_IOVA(flc));
DPAA2_SEC_DP_DEBUG(
- "RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d\n",
+ "RAW CIPHER SG: fdaddr =%" PRIx64 " off =%d, len =%d",
DPAA2_GET_FD_ADDR(fd),
DPAA2_GET_FD_OFFSET(fd),
DPAA2_GET_FD_LEN(fd));
@@ -721,7 +721,7 @@ dpaa2_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -811,7 +811,7 @@ sec_fd_to_userdata(const struct qbman_fd *fd)
void *userdata;
fle = (struct qbman_fle *)DPAA2_IOVA_TO_VADDR(DPAA2_GET_FD_ADDR(fd));
- DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x\n",
+ DPAA2_SEC_DP_DEBUG("FLE addr = %x - %x, offset = %x",
fle->addr_hi, fle->addr_lo, fle->fin_bpid_offset);
userdata = (struct rte_crypto_op *)DPAA2_GET_FLE_ADDR((fle - 1));
/* free the fle memory */
@@ -847,7 +847,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_SEC_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -900,7 +900,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely(
(status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_SEC_DP_DEBUG("No frame is delivered\n");
+ DPAA2_SEC_DP_DEBUG("No frame is delivered");
continue;
}
}
@@ -929,7 +929,7 @@ dpaa2_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
*dequeue_status = 1;
*n_success = num_rx;
- DPAA2_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA2_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
/*Return the total number of packets received to DPAA2 app*/
return num_rx;
}
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index 906ea39047..131cd90c94 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -102,7 +102,7 @@ ern_sec_fq_handler(struct qman_portal *qm __rte_unused,
struct qman_fq *fq,
const struct qm_mr_entry *msg)
{
- DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x\n",
+ DPAA_SEC_DP_ERR("sec fq %d error, RC = %x, seqnum = %x",
fq->fqid, msg->ern.rc, msg->ern.seqnum);
}
@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
} else {
if (dpaa_sec_dp_dump > DPAA_SEC_DP_NO_DUMP) {
- DPAA_SEC_DP_WARN("SEC return err:0x%x\n",
+ DPAA_SEC_DP_WARN("SEC return err:0x%x",
ctx->fd_status);
if (dpaa_sec_dp_dump > DPAA_SEC_DP_ERR_DUMP)
dpaa_sec_dump(ctx, qp);
@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
} else if (unlikely(ses->qp[rte_lcore_id() %
MAX_DPAA_CORES] != qp)) {
DPAA_SEC_DP_ERR("Old:sess->qp = %p"
- " New qp = %p\n",
+ " New qp = %p",
ses->qp[rte_lcore_id() %
MAX_DPAA_CORES], qp);
frames_to_send = loop;
@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
fd->cmd = 0x80000000 |
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset));
- DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
*((uint32_t *)((uint8_t *)op +
ses->pdcp.hfn_ovd_offset)),
ses->pdcp.hfn_ovd);
@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
dpaa_qp->rx_pkts += num_rx;
dpaa_qp->rx_errs += nb_ops - num_rx;
- DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
return num_rx;
}
@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
NULL, NULL, NULL, NULL,
SOCKET_ID_ANY, 0);
if (!qp->ctx_pool) {
- DPAA_SEC_ERR("%s create failed\n", str);
+ DPAA_SEC_ERR("%s create failed", str);
return -ENOMEM;
}
} else
@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
session->aead_key.data = rte_zmalloc(NULL, xform->aead.key.length,
RTE_CACHE_LINE_SIZE);
if (session->aead_key.data == NULL && xform->aead.key.length > 0) {
- DPAA_SEC_ERR("No Memory for aead key\n");
+ DPAA_SEC_ERR("No Memory for aead key");
return -ENOMEM;
}
session->aead_key.length = xform->aead.key.length;
@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
if (&qi->inq[i] == fq) {
if (qman_retire_fq(fq, NULL) != 0)
- DPAA_SEC_DEBUG("Queue is not retired\n");
+ DPAA_SEC_DEBUG("Queue is not retired");
qman_oos_fq(fq);
qi->inq_attach[i] = 0;
return 0;
@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
qp->outq.cb.dqrr_dpdk_cb = dpaa_sec_process_atomic_event;
break;
case RTE_SCHED_TYPE_ORDERED:
- DPAA_SEC_ERR("Ordered queue schedule type is not supported\n");
+ DPAA_SEC_ERR("Ordered queue schedule type is not supported");
return -ENOTSUP;
default:
opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
dpaa_sec_dp_dump = atoi(value);
if (dpaa_sec_dp_dump > DPAA_SEC_DP_FULL_DUMP) {
DPAA_SEC_WARN("WARN: DPAA_SEC_DP_DUMP_LEVEL is not "
- "supported, changing to FULL error prints\n");
+ "supported, changing to FULL error prints");
dpaa_sec_dp_dump = DPAA_SEC_DP_FULL_DUMP;
}
@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
ret = munmap(internals->sec_hw, MAP_SIZE);
if (ret)
- DPAA_SEC_WARN("munmap failed\n");
+ DPAA_SEC_WARN("munmap failed");
close(map_fd);
cryptodev->driver_id = dpaa_cryptodev_driver_id;
@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
return 0;
init_error:
- DPAA_SEC_ERR("driver %s: create failed\n", cryptodev->data->name);
+ DPAA_SEC_ERR("driver %s: create failed", cryptodev->data->name);
rte_free(cryptodev->security_ctx);
return -EFAULT;
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_log.h b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
index fb895a8bc6..82ac1fa1c4 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_log.h
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_log.h
@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;
/* DP Logs, toggled out at compile time if level lower than current level */
#define DPAA_SEC_DP_LOG(level, fmt, args...) \
- RTE_LOG_DP(level, PMD, fmt, ## args)
+ RTE_LOG_DP_LINE(level, PMD, fmt, ## args)
#define DPAA_SEC_DP_DEBUG(fmt, args...) \
DPAA_SEC_DP_LOG(DEBUG, fmt, ## args)
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
index ce49c4996f..f62c803894 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_raw_dp.c
@@ -761,7 +761,7 @@ build_dpaa_raw_proto_sg(uint8_t *drv_ctx,
fd->cmd = 0x80000000 |
*((uint32_t *)((uint8_t *)userdata +
ses->pdcp.hfn_ovd_offset));
- DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u\n",
+ DPAA_SEC_DP_DEBUG("Per packet HFN: %x, ovd:%u",
*((uint32_t *)((uint8_t *)userdata +
ses->pdcp.hfn_ovd_offset)),
ses->pdcp.hfn_ovd);
@@ -806,7 +806,7 @@ dpaa_sec_raw_enqueue_burst(void *qp_data, uint8_t *drv_ctx,
} else if (unlikely(ses->qp[rte_lcore_id() %
MAX_DPAA_CORES] != dpaa_qp)) {
DPAA_SEC_DP_ERR("Old:sess->qp = %p"
- " New qp = %p\n",
+ " New qp = %p",
ses->qp[rte_lcore_id() %
MAX_DPAA_CORES], dpaa_qp);
frames_to_send = loop;
@@ -955,7 +955,7 @@ dpaa_sec_raw_dequeue_burst(void *qp_data, uint8_t *drv_ctx,
*dequeue_status = 1;
*n_success = num_rx;
- DPAA_SEC_DP_DEBUG("SEC Received %d Packets\n", num_rx);
+ DPAA_SEC_DP_DEBUG("SEC Received %d Packets", num_rx);
return num_rx;
}
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.c b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
index f485d130b6..0d2538832d 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.c
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.c
@@ -165,7 +165,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,
rte_cryptodev_pmd_probing_finish(dev);
- IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s\n",
+ IPSEC_MB_LOG(INFO, "IPSec Multi-buffer library version used: %s",
imb_get_version_str());
if (rte_eal_process_type() == RTE_PROC_PRIMARY) {
@@ -176,7 +176,7 @@ ipsec_mb_create(struct rte_vdev_device *vdev,
if (retval)
IPSEC_MB_LOG(ERR,
- "IPSec Multi-buffer register MP request failed.\n");
+ "IPSec Multi-buffer register MP request failed.");
}
return retval;
}
diff --git a/drivers/crypto/ipsec_mb/ipsec_mb_private.h b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
index 52722f94a0..252bcb3192 100644
--- a/drivers/crypto/ipsec_mb/ipsec_mb_private.h
+++ b/drivers/crypto/ipsec_mb/ipsec_mb_private.h
@@ -198,7 +198,7 @@ alloc_init_mb_mgr(void)
IMB_MGR *mb_mgr = alloc_mb_mgr(0);
if (unlikely(mb_mgr == NULL)) {
- IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data\n");
+ IPSEC_MB_LOG(ERR, "Failed to allocate IMB_MGR data");
return NULL;
}
diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 80de25c65b..8e74645e0a 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
uint16_t xcbc_mac_digest_len =
get_truncated_digest_byte_length(IMB_AUTH_AES_XCBC);
if (sess->auth.req_digest_len != xcbc_mac_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
get_digest_byte_length(IMB_AUTH_AES_CMAC);
if (sess->auth.req_digest_len > cmac_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
/*
@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len >
get_digest_byte_length(IMB_AUTH_AES_GMAC)) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
sess->template_job.key_len_in_bytes = IMB_KEY_256_BYTES;
break;
default:
- IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+ IPSEC_MB_LOG(ERR, "Invalid authentication key length");
return -EINVAL;
}
sess->template_job.u.GMAC._key = &sess->cipher.gcm_key;
@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
sess->template_job.hash_alg = IMB_AUTH_ZUC_EIA3_BITLEN;
if (sess->auth.req_digest_len != 4) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
} else if (xform->auth.key.length == 32) {
@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
#else
if (sess->auth.req_digest_len != 4) {
#endif
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
} else {
- IPSEC_MB_LOG(ERR, "Invalid authentication key length\n");
+ IPSEC_MB_LOG(ERR, "Invalid authentication key length");
return -EINVAL;
}
@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
get_truncated_digest_byte_length(
IMB_AUTH_SNOW3G_UIA2_BITLEN);
if (sess->auth.req_digest_len != snow3g_uia2_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
uint16_t kasumi_f9_digest_len =
get_truncated_digest_byte_length(IMB_AUTH_KASUMI_UIA1);
if (sess->auth.req_digest_len != kasumi_f9_digest_len) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
sess->template_job.auth_tag_output_len_in_bytes = sess->auth.req_digest_len;
@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len > full_digest_size ||
sess->auth.req_digest_len == 0) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
if (sess->auth.req_digest_len < AES_CCM_DIGEST_MIN_LEN ||
sess->auth.req_digest_len > AES_CCM_DIGEST_MAX_LEN ||
(sess->auth.req_digest_len & 1) == 1) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
/* GCM digest size must be between 1 and 16 */
if (sess->auth.req_digest_len == 0 ||
sess->auth.req_digest_len > 16) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
sess->template_job.enc_keys = sess->cipher.expanded_aes_keys.encode;
sess->template_job.dec_keys = sess->cipher.expanded_aes_keys.decode;
if (sess->auth.req_digest_len != 16) {
- IPSEC_MB_LOG(ERR, "Invalid digest size\n");
+ IPSEC_MB_LOG(ERR, "Invalid digest size");
return -EINVAL;
}
break;
@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
total_len = sgl_linear_cipher_auth_len(job, &auth_len);
linear_buf = rte_zmalloc(NULL, total_len + job->auth_tag_output_len_in_bytes, 0);
if (linear_buf == NULL) {
- IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer\n");
+ IPSEC_MB_LOG(ERR, "Error allocating memory for SGL Linear Buffer");
return -1;
}
diff --git a/drivers/crypto/ipsec_mb/pmd_snow3g.c b/drivers/crypto/ipsec_mb/pmd_snow3g.c
index e64df1a462..a0b354bb83 100644
--- a/drivers/crypto/ipsec_mb/pmd_snow3g.c
+++ b/drivers/crypto/ipsec_mb/pmd_snow3g.c
@@ -186,7 +186,7 @@ process_snow3g_cipher_op_bit(struct ipsec_mb_qp *qp,
src = rte_pktmbuf_mtod(op->sym->m_src, uint8_t *);
if (op->sym->m_dst == NULL) {
op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
- IPSEC_MB_LOG(ERR, "bit-level in-place not supported\n");
+ IPSEC_MB_LOG(ERR, "bit-level in-place not supported");
return 0;
}
length_in_bits = op->sym->cipher.data.length;
@@ -317,7 +317,7 @@ process_ops(struct rte_crypto_op **ops, struct snow3g_session *session,
IPSEC_MB_LOG(ERR,
"PMD supports only contiguous mbufs, "
"op (%p) provides noncontiguous mbuf as "
- "source/destination buffer.\n", ops[i]);
+ "source/destination buffer.", ops[i]);
ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
return 0;
}
diff --git a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
index 4647d568de..aa2363ef15 100644
--- a/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
+++ b/drivers/crypto/octeontx/otx_cryptodev_hw_access.h
@@ -211,7 +211,7 @@ otx_cpt_ring_dbell(struct cpt_instance *instance, uint16_t count)
static __rte_always_inline void *
get_cpt_inst(struct command_queue *cqueue)
{
- CPT_LOG_DP_DEBUG("CPT queue idx %u\n", cqueue->idx);
+ CPT_LOG_DP_DEBUG("CPT queue idx %u", cqueue->idx);
return &cqueue->qhead[cqueue->idx * CPT_INST_SIZE];
}
@@ -305,9 +305,9 @@ complete:
" error, MC completion code : 0x%x", user_req,
ret);
}
- CPT_LOG_DP_DEBUG("MC status %.8x\n",
+ CPT_LOG_DP_DEBUG("MC status %.8x",
*((volatile uint32_t *)user_req->alternate_caddr));
- CPT_LOG_DP_DEBUG("HW status %.8x\n",
+ CPT_LOG_DP_DEBUG("HW status %.8x",
*((volatile uint32_t *)user_req->completion_addr));
} else if ((cptres->s8x.compcode == CPT_8X_COMP_E_SWERR) ||
(cptres->s8x.compcode == CPT_8X_COMP_E_FAULT)) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 101111e85b..e10a172f46 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -57,13 +57,13 @@ static void ossl_legacy_provider_load(void)
/* Load Multiple providers into the default (NULL) library context */
legacy = OSSL_PROVIDER_load(NULL, "legacy");
if (legacy == NULL) {
- OPENSSL_LOG(ERR, "Failed to load Legacy provider\n");
+ OPENSSL_LOG(ERR, "Failed to load Legacy provider");
return;
}
deflt = OSSL_PROVIDER_load(NULL, "default");
if (deflt == NULL) {
- OPENSSL_LOG(ERR, "Failed to load Default provider\n");
+ OPENSSL_LOG(ERR, "Failed to load Default provider");
OSSL_PROVIDER_unload(legacy);
return;
}
@@ -2123,7 +2123,7 @@ process_openssl_dsa_sign_op_evp(struct rte_crypto_op *cop,
dsa_sign_data_p = (const unsigned char *)dsa_sign_data;
DSA_SIG *sign = d2i_DSA_SIG(NULL, &dsa_sign_data_p, outlen);
if (!sign) {
- OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
OPENSSL_free(dsa_sign_data);
goto err_dsa_sign;
} else {
@@ -2168,7 +2168,7 @@ process_openssl_dsa_verify_op_evp(struct rte_crypto_op *cop,
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
if (!param_bld) {
- OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
return -1;
}
@@ -2246,7 +2246,7 @@ process_openssl_dsa_sign_op(struct rte_crypto_op *cop,
dsa);
if (sign == NULL) {
- OPENSSL_LOG(ERR, "%s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, "%s:%d", __func__, __LINE__);
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
} else {
const BIGNUM *r = NULL, *s = NULL;
@@ -2275,7 +2275,7 @@ process_openssl_dsa_verify_op(struct rte_crypto_op *cop,
BIGNUM *pub_key = NULL;
if (sign == NULL) {
- OPENSSL_LOG(ERR, " %s:%d\n", __func__, __LINE__);
+ OPENSSL_LOG(ERR, " %s:%d", __func__, __LINE__);
cop->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED;
return -1;
}
@@ -2352,7 +2352,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (!OSSL_PARAM_BLD_push_BN(param_bld_peer, OSSL_PKEY_PARAM_PUB_KEY,
pub_key)) {
- OPENSSL_LOG(ERR, "Failed to set public key\n");
+ OPENSSL_LOG(ERR, "Failed to set public key");
OSSL_PARAM_BLD_free(param_bld_peer);
BN_free(pub_key);
return ret;
@@ -2397,7 +2397,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (!OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
priv_key)) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
EVP_PKEY_CTX_free(peer_ctx);
OSSL_PARAM_free(params_peer);
BN_free(pub_key);
@@ -2423,7 +2423,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
goto err_dh;
if (op->ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
- OPENSSL_LOG(DEBUG, "%s:%d updated pub key\n", __func__, __LINE__);
+ OPENSSL_LOG(DEBUG, "%s:%d updated pub key", __func__, __LINE__);
if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PUB_KEY, &pub_key))
goto err_dh;
/* output public key */
@@ -2432,7 +2432,7 @@ process_openssl_dh_op_evp(struct rte_crypto_op *cop,
if (op->ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {
- OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n", __func__, __LINE__);
+ OPENSSL_LOG(DEBUG, "%s:%d updated priv key", __func__, __LINE__);
if (!EVP_PKEY_get_bn_param(dhpkey, OSSL_PKEY_PARAM_PRIV_KEY, &priv_key))
goto err_dh;
@@ -2527,7 +2527,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
}
ret = set_dh_priv_key(dh_key, priv_key);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
BN_free(peer_key);
BN_free(priv_key);
@@ -2574,7 +2574,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
}
ret = set_dh_priv_key(dh_key, priv_key);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to set private key\n");
+ OPENSSL_LOG(ERR, "Failed to set private key");
cop->status = RTE_CRYPTO_OP_STATUS_ERROR;
BN_free(priv_key);
return 0;
@@ -2596,7 +2596,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PUB_KEY_GENERATE) {
const BIGNUM *pub_key = NULL;
- OPENSSL_LOG(DEBUG, "%s:%d update public key\n",
+ OPENSSL_LOG(DEBUG, "%s:%d update public key",
__func__, __LINE__);
/* get the generated keys */
@@ -2610,7 +2610,7 @@ process_openssl_dh_op(struct rte_crypto_op *cop,
if (asym_op->dh.ke_type == RTE_CRYPTO_ASYM_KE_PRIV_KEY_GENERATE) {
const BIGNUM *priv_key = NULL;
- OPENSSL_LOG(DEBUG, "%s:%d updated priv key\n",
+ OPENSSL_LOG(DEBUG, "%s:%d updated priv key",
__func__, __LINE__);
/* get the generated keys */
@@ -2719,7 +2719,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
default:
cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
OPENSSL_LOG(ERR,
- "rsa pad type not supported %d\n", pad);
+ "rsa pad type not supported %d", pad);
return ret;
}
@@ -2746,7 +2746,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
op->rsa.cipher.length = outlen;
OPENSSL_LOG(DEBUG,
- "length of encrypted text %zu\n", outlen);
+ "length of encrypted text %zu", outlen);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -2770,7 +2770,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
goto err_rsa;
op->rsa.message.length = outlen;
- OPENSSL_LOG(DEBUG, "length of decrypted text %zu\n", outlen);
+ OPENSSL_LOG(DEBUG, "length of decrypted text %zu", outlen);
break;
case RTE_CRYPTO_ASYM_OP_SIGN:
@@ -2825,7 +2825,7 @@ process_openssl_rsa_op_evp(struct rte_crypto_op *cop,
OPENSSL_LOG(DEBUG,
"Length of public_decrypt %zu "
- "length of message %zd\n",
+ "length of message %zd",
outlen, op->rsa.message.length);
if (CRYPTO_memcmp(tmp, op->rsa.message.data,
op->rsa.message.length)) {
@@ -3097,7 +3097,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
default:
cop->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
OPENSSL_LOG(ERR,
- "rsa pad type not supported %d\n", pad);
+ "rsa pad type not supported %d", pad);
return 0;
}
@@ -3112,7 +3112,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
if (ret > 0)
op->rsa.cipher.length = ret;
OPENSSL_LOG(DEBUG,
- "length of encrypted text %d\n", ret);
+ "length of encrypted text %d", ret);
break;
case RTE_CRYPTO_ASYM_OP_DECRYPT:
@@ -3150,7 +3150,7 @@ process_openssl_rsa_op(struct rte_crypto_op *cop,
OPENSSL_LOG(DEBUG,
"Length of public_decrypt %d "
- "length of message %zd\n",
+ "length of message %zd",
ret, op->rsa.message.length);
if ((ret <= 0) || (CRYPTO_memcmp(tmp, op->rsa.message.data,
op->rsa.message.length))) {
diff --git a/drivers/crypto/openssl/rte_openssl_pmd_ops.c b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
index 1bbb855a59..b7b612fc57 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd_ops.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd_ops.c
@@ -892,7 +892,7 @@ static int openssl_set_asym_session_parameters(
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
OSSL_PARAM_BLD * param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_rsa;
}
@@ -900,7 +900,7 @@ static int openssl_set_asym_session_parameters(
|| !OSSL_PARAM_BLD_push_BN(param_bld,
OSSL_PKEY_PARAM_RSA_E, e)) {
OSSL_PARAM_BLD_free(param_bld);
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_rsa;
}
@@ -1033,14 +1033,14 @@ static int openssl_set_asym_session_parameters(
ret = set_rsa_params(rsa, p, q);
if (ret) {
OPENSSL_LOG(ERR,
- "failed to set rsa params\n");
+ "failed to set rsa params");
RSA_free(rsa);
goto err_rsa;
}
ret = set_rsa_crt_params(rsa, dmp1, dmq1, iqmp);
if (ret) {
OPENSSL_LOG(ERR,
- "failed to set crt params\n");
+ "failed to set crt params");
RSA_free(rsa);
/*
* set already populated params to NULL
@@ -1053,7 +1053,7 @@ static int openssl_set_asym_session_parameters(
ret = set_rsa_keys(rsa, n, e, d);
if (ret) {
- OPENSSL_LOG(ERR, "Failed to load rsa keys\n");
+ OPENSSL_LOG(ERR, "Failed to load rsa keys");
RSA_free(rsa);
return ret;
}
@@ -1080,7 +1080,7 @@ err_rsa:
BN_CTX *ctx = BN_CTX_new();
if (ctx == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
return ret;
}
BN_CTX_start(ctx);
@@ -1111,7 +1111,7 @@ err_rsa:
BN_CTX *ctx = BN_CTX_new();
if (ctx == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
return ret;
}
BN_CTX_start(ctx);
@@ -1152,7 +1152,7 @@ err_rsa:
OSSL_PARAM_BLD *param_bld = NULL;
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dh;
}
if ((!OSSL_PARAM_BLD_push_utf8_string(param_bld,
@@ -1168,7 +1168,7 @@ err_rsa:
OSSL_PARAM_BLD *param_bld_peer = NULL;
param_bld_peer = OSSL_PARAM_BLD_new();
if (!param_bld_peer) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
OSSL_PARAM_BLD_free(param_bld);
goto err_dh;
}
@@ -1203,7 +1203,7 @@ err_rsa:
dh = DH_new();
if (dh == NULL) {
OPENSSL_LOG(ERR,
- "failed to allocate resources\n");
+ "failed to allocate resources");
goto err_dh;
}
ret = set_dh_params(dh, p, g);
@@ -1217,7 +1217,7 @@ err_rsa:
break;
err_dh:
- OPENSSL_LOG(ERR, " failed to set dh params\n");
+ OPENSSL_LOG(ERR, " failed to set dh params");
#if (OPENSSL_VERSION_NUMBER >= 0x30000000L)
BN_free(*p);
BN_free(*g);
@@ -1263,7 +1263,7 @@ err_dh:
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dsa;
}
@@ -1273,7 +1273,7 @@ err_dh:
|| !OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
*priv_key)) {
OSSL_PARAM_BLD_free(param_bld);
- OPENSSL_LOG(ERR, "failed to allocate resources\n");
+ OPENSSL_LOG(ERR, "failed to allocate resources");
goto err_dsa;
}
asym_session->xfrm_type = RTE_CRYPTO_ASYM_XFORM_DSA;
@@ -1313,14 +1313,14 @@ err_dh:
DSA *dsa = DSA_new();
if (dsa == NULL) {
OPENSSL_LOG(ERR,
- " failed to allocate resources\n");
+ " failed to allocate resources");
goto err_dsa;
}
ret = set_dsa_params(dsa, p, q, g);
if (ret) {
DSA_free(dsa);
- OPENSSL_LOG(ERR, "Failed to dsa params\n");
+ OPENSSL_LOG(ERR, "Failed to dsa params");
goto err_dsa;
}
@@ -1334,7 +1334,7 @@ err_dh:
ret = set_dsa_keys(dsa, pub_key, priv_key);
if (ret) {
DSA_free(dsa);
- OPENSSL_LOG(ERR, "Failed to set keys\n");
+ OPENSSL_LOG(ERR, "Failed to set keys");
goto err_dsa;
}
asym_session->u.s.dsa = dsa;
@@ -1369,21 +1369,21 @@ err_dsa:
param_bld = OSSL_PARAM_BLD_new();
if (!param_bld) {
- OPENSSL_LOG(ERR, "failed to allocate params\n");
+ OPENSSL_LOG(ERR, "failed to allocate params");
goto err_sm2;
}
ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
OSSL_ASYM_CIPHER_PARAM_DIGEST, "SM3", 0);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
ret = OSSL_PARAM_BLD_push_utf8_string(param_bld,
OSSL_PKEY_PARAM_GROUP_NAME, "SM2", 0);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
@@ -1393,7 +1393,7 @@ err_dsa:
ret = OSSL_PARAM_BLD_push_BN(param_bld, OSSL_PKEY_PARAM_PRIV_KEY,
pkey_bn);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
@@ -1408,13 +1408,13 @@ err_dsa:
ret = OSSL_PARAM_BLD_push_octet_string(param_bld,
OSSL_PKEY_PARAM_PUB_KEY, pubkey, len);
if (!ret) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
params = OSSL_PARAM_BLD_to_param(param_bld);
if (!params) {
- OPENSSL_LOG(ERR, "failed to push params\n");
+ OPENSSL_LOG(ERR, "failed to push params");
goto err_sm2;
}
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 2bf3060278..5d240a3de1 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s",
qat_pci_dev->name, "asym");
- QAT_LOG(DEBUG, "Creating QAT ASYM device %s\n", name);
+ QAT_LOG(DEBUG, "Creating QAT ASYM device %s", name);
if (gen_dev_ops->cryptodev_ops == NULL) {
QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
diff --git a/drivers/crypto/qat/qat_sym_session.c b/drivers/crypto/qat/qat_sym_session.c
index 9f4f6c3d93..224cc0ab50 100644
--- a/drivers/crypto/qat/qat_sym_session.c
+++ b/drivers/crypto/qat/qat_sym_session.c
@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
ret = -ENOTSUP;
goto error_out;
default:
- QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u\n",
+ QAT_LOG(ERR, "Crypto: Undefined Cipher specified %u",
cipher_xform->algo);
ret = -EINVAL;
goto error_out;
@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
aead_xform);
break;
default:
- QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u\n",
+ QAT_LOG(ERR, "Crypto: Undefined AEAD specified %u",
aead_xform->algo);
return -EINVAL;
}
@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
err = imb_get_errno(m);
if (err)
- QAT_LOG(ERR, "Error: %s!\n", imb_get_strerror(err));
+ QAT_LOG(ERR, "Error: %s!", imb_get_strerror(err));
return -EFAULT;
}
@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
&state2_size, cdesc->aes_cmac);
#endif
if (ret) {
- cdesc->aes_cmac ? QAT_LOG(ERR,
- "(CMAC)precompute failed")
- : QAT_LOG(ERR,
- "(XCBC)precompute failed");
+ QAT_LOG(ERR, "(%s)precompute failed",
+ cdesc->aes_cmac ? "CMAC" : "XCBC");
return -EFAULT;
}
break;
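Besides dropping the trailing newline, the qat_sym_session.c hunk above also folds two duplicated QAT_LOG() calls into a single call that selects the algorithm name through a ternary format argument. A minimal sketch of the same pattern with plain printf() (the helper name below is illustrative, not part of the driver):

#include <stdio.h>
#include <stdbool.h>

/* Select the algorithm label as an argument instead of duplicating the
 * whole log statement for each branch.
 */
static void report_precompute_failure(bool aes_cmac)
{
	printf("(%s)precompute failed\n", aes_cmac ? "CMAC" : "XCBC");
}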
diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c
index 824383512e..e4b1a32398 100644
--- a/drivers/crypto/uadk/uadk_crypto_pmd.c
+++ b/drivers/crypto/uadk/uadk_crypto_pmd.c
@@ -634,7 +634,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
setup.sched_param = &params;
sess->handle_cipher = wd_cipher_alloc_sess(&setup);
if (!sess->handle_cipher) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
ret = -EINVAL;
goto env_uninit;
}
@@ -642,7 +642,7 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev,
ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length);
if (ret) {
wd_cipher_free_sess(sess->handle_cipher);
- UADK_LOG(ERR, "uadk failed to set key!\n");
+ UADK_LOG(ERR, "uadk failed to set key!");
ret = -EINVAL;
goto env_uninit;
}
@@ -734,7 +734,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
setup.sched_param = &params;
sess->handle_digest = wd_digest_alloc_sess(&setup);
if (!sess->handle_digest) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
ret = -EINVAL;
goto env_uninit;
}
@@ -745,7 +745,7 @@ uadk_set_session_auth_parameters(struct rte_cryptodev *dev,
xform->auth.key.data,
xform->auth.key.length);
if (ret) {
- UADK_LOG(ERR, "uadk failed to alloc session!\n");
+ UADK_LOG(ERR, "uadk failed to alloc session!");
wd_digest_free_sess(sess->handle_digest);
sess->handle_digest = 0;
ret = -EINVAL;
diff --git a/drivers/crypto/virtio/virtio_cryptodev.c b/drivers/crypto/virtio/virtio_cryptodev.c
index 4854820ba6..c0d3178b71 100644
--- a/drivers/crypto/virtio/virtio_cryptodev.c
+++ b/drivers/crypto/virtio/virtio_cryptodev.c
@@ -591,7 +591,7 @@ virtio_crypto_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
qp_conf->nb_descriptors, socket_id, &vq);
if (ret < 0) {
VIRTIO_CRYPTO_INIT_LOG_ERR(
- "virtio crypto data queue initialization failed\n");
+ "virtio crypto data queue initialization failed");
return ret;
}
diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 10e65ef1d7..3d4fd818f8 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -295,7 +295,7 @@ static struct fsl_qdma_queue
for (i = 0; i < queue_num; i++) {
if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
- DPAA_QDMA_ERR("Get wrong queue-sizes.\n");
+ DPAA_QDMA_ERR("Get wrong queue-sizes.");
goto fail;
}
queue_temp = queue_head + i + (j * queue_num);
@@ -345,7 +345,7 @@ fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
status_size = QDMA_STATUS_SIZE;
if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
- DPAA_QDMA_ERR("Get wrong status_size.\n");
+ DPAA_QDMA_ERR("Get wrong status_size.");
return NULL;
}
@@ -643,7 +643,7 @@ fsl_qdma_alloc_chan_resources(struct fsl_qdma_chan *fsl_chan)
FSL_QDMA_COMMAND_BUFFER_SIZE, 64);
if (ret) {
DPAA_QDMA_ERR(
- "failed to alloc dma buffer for comp descriptor\n");
+ "failed to alloc dma buffer for comp descriptor");
goto exit;
}
@@ -779,7 +779,7 @@ dpaa_qdma_enqueue(void *dev_private, uint16_t vchan,
(dma_addr_t)dst, (dma_addr_t)src,
length, NULL, NULL);
if (!fsl_comp) {
- DPAA_QDMA_DP_DEBUG("fsl_comp is NULL\n");
+ DPAA_QDMA_DP_DEBUG("fsl_comp is NULL");
return -1;
}
ret = fsl_qdma_enqueue_desc(fsl_chan, fsl_comp, flags);
@@ -803,19 +803,19 @@ dpaa_qdma_dequeue_status(void *dev_private, uint16_t vchan,
intr = qdma_readl_be(status + FSL_QDMA_DEDR);
if (intr) {
- DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+ DPAA_QDMA_ERR("DMA transaction error! %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECBR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
qdma_writel(0xffffffff,
status + FSL_QDMA_DEDR);
intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -849,19 +849,19 @@ dpaa_qdma_dequeue(void *dev_private,
intr = qdma_readl_be(status + FSL_QDMA_DEDR);
if (intr) {
- DPAA_QDMA_ERR("DMA transaction error! %x\n", intr);
+ DPAA_QDMA_ERR("DMA transaction error! %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW0R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW0R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW1R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW1R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW2R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW2R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFDW3R);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFDW3R %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECFQIDR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECFQIDR %x", intr);
intr = qdma_readl(status + FSL_QDMA_DECBR);
- DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x\n", intr);
+ DPAA_QDMA_INFO("reg FSL_QDMA_DECBR %x", intr);
qdma_writel(0xffffffff,
status + FSL_QDMA_DEDR);
intr = qdma_readl(status + FSL_QDMA_DEDR);
@@ -974,7 +974,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
close(ccsr_qdma_fd);
if (fsl_qdma->ctrl_base == MAP_FAILED) {
DPAA_QDMA_ERR("Can not map CCSR base qdma: Phys: %08" PRIx64
- "size %d\n", phys_addr, regs_size);
+ "size %d", phys_addr, regs_size);
goto err;
}
@@ -998,7 +998,7 @@ dpaa_qdma_init(struct rte_dma_dev *dmadev)
ret = fsl_qdma_reg_init(fsl_qdma);
if (ret) {
- DPAA_QDMA_ERR("Can't Initialize the qDMA engine.\n");
+ DPAA_QDMA_ERR("Can't Initialize the qDMA engine.");
munmap(fsl_qdma->ctrl_base, regs_size);
goto err;
}
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 2c91ceec13..5780e49297 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -578,7 +578,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -608,7 +608,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
DPAA2_QDMA_DP_WARN(
- "VDQ command not issued.QBMAN busy\n");
+ "VDQ command not issued.QBMAN busy");
/* Portal was busy, try again */
continue;
}
@@ -684,7 +684,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
DPAA2_QDMA_DP_WARN(
- "VDQ command is not issued. QBMAN is busy (2)\n");
+ "VDQ command is not issued. QBMAN is busy (2)");
continue;
}
break;
@@ -728,7 +728,7 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -825,7 +825,7 @@ dpdmai_dev_submit_multi(struct qdma_virt_queue *qdma_vq,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_QDMA_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
diff --git a/drivers/dma/hisilicon/hisi_dmadev.c b/drivers/dma/hisilicon/hisi_dmadev.c
index 4db3b0554c..8bc076f5d5 100644
--- a/drivers/dma/hisilicon/hisi_dmadev.c
+++ b/drivers/dma/hisilicon/hisi_dmadev.c
@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
struct hisi_dma_dev *hw = dev->data->dev_private;
if (hw->iomz == NULL) {
- HISI_DMA_ERR(hw, "Vchan was not setup, start fail!\n");
+ HISI_DMA_ERR(hw, "Vchan was not setup, start fail!");
return -EINVAL;
}
@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
* status array indexed by csq_head. Only error logs
* are used for prompting.
*/
- HISI_DMA_ERR(hw, "invalid csq_head:%u!\n", csq_head);
+ HISI_DMA_ERR(hw, "invalid csq_head:%u!", csq_head);
count = 0;
break;
}
@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
rte_pci_device_name(&pci_dev->addr, name, sizeof(name));
if (pci_dev->mem_resource[2].addr == NULL) {
- HISI_DMA_LOG(ERR, "%s BAR2 is NULL!\n", name);
+ HISI_DMA_LOG(ERR, "%s BAR2 is NULL!", name);
return -ENODEV;
}
diff --git a/drivers/dma/idxd/idxd_common.c b/drivers/dma/idxd/idxd_common.c
index 83d53942eb..dc2e8cd432 100644
--- a/drivers/dma/idxd/idxd_common.c
+++ b/drivers/dma/idxd/idxd_common.c
@@ -616,7 +616,7 @@ idxd_dmadev_create(const char *name, struct rte_device *dev,
sizeof(idxd->batch_comp_ring[0])) * (idxd->max_batches + 1),
sizeof(idxd->batch_comp_ring[0]), dev->numa_node);
if (idxd->batch_comp_ring == NULL) {
- IDXD_PMD_ERR("Unable to reserve memory for batch data\n");
+ IDXD_PMD_ERR("Unable to reserve memory for batch data");
ret = -ENOMEM;
goto cleanup;
}
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index a78889a7ef..2ee78773bb 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
/* look up queue 0 to get the PCI structure */
snprintf(qname, sizeof(qname), "%s-q0", name);
- IDXD_PMD_INFO("Looking up %s\n", qname);
+ IDXD_PMD_INFO("Looking up %s", qname);
ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
if (ret != 0) {
IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
for (qid = 1; qid < max_qid; qid++) {
/* add the queue number to each device name */
snprintf(qname, sizeof(qname), "%s-q%d", name, qid);
- IDXD_PMD_INFO("Looking up %s\n", qname);
+ IDXD_PMD_INFO("Looking up %s", qname);
ret = idxd_dmadev_create(qname, &dev->device, NULL, &idxd_pci_ops);
if (ret != 0) {
IDXD_PMD_ERR("Failed to create dmadev %s", name);
@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
return ret;
}
if (idxd.u.pci->portals == NULL) {
- IDXD_PMD_ERR("Error, invalid portal assigned during initialization\n");
+ IDXD_PMD_ERR("Error, invalid portal assigned during initialization");
free(idxd.u.pci);
return -EINVAL;
}
diff --git a/drivers/dma/ioat/ioat_dmadev.c b/drivers/dma/ioat/ioat_dmadev.c
index 5fc14bcf22..8b7ff5652f 100644
--- a/drivers/dma/ioat/ioat_dmadev.c
+++ b/drivers/dma/ioat/ioat_dmadev.c
@@ -156,12 +156,12 @@ ioat_dev_start(struct rte_dma_dev *dev)
ioat->offset = 0;
ioat->failure = 0;
- IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]\n",
+ IOAT_PMD_DEBUG("channel status - %s [0x%"PRIx64"]",
chansts_readable[ioat->status & IOAT_CHANSTS_STATUS],
ioat->status);
if ((ioat->regs->chansts & IOAT_CHANSTS_STATUS) == IOAT_CHANSTS_HALTED) {
- IOAT_PMD_WARN("Device HALTED on start, attempting to recover\n");
+ IOAT_PMD_WARN("Device HALTED on start, attempting to recover");
if (__ioat_recover(ioat) != 0) {
IOAT_PMD_ERR("Device couldn't be recovered");
return -1;
@@ -469,7 +469,7 @@ ioat_completed(void *dev_private, uint16_t qid __rte_unused, const uint16_t max_
ioat->failure = ioat->regs->chanerr;
ioat->next_read = read + count + 1;
if (__ioat_recover(ioat) != 0) {
- IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+ IOAT_PMD_ERR("Device HALTED and could not be recovered");
__dev_dump(dev_private, stdout);
return 0;
}
@@ -515,7 +515,7 @@ ioat_completed_status(void *dev_private, uint16_t qid __rte_unused,
count++;
ioat->next_read = read + count;
if (__ioat_recover(ioat) != 0) {
- IOAT_PMD_ERR("Device HALTED and could not be recovered\n");
+ IOAT_PMD_ERR("Device HALTED and could not be recovered");
__dev_dump(dev_private, stdout);
return 0;
}
@@ -652,12 +652,12 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
/* Do device initialization - reset and set error behaviour. */
if (ioat->regs->chancnt != 1)
- IOAT_PMD_WARN("%s: Channel count == %d\n", __func__,
+ IOAT_PMD_WARN("%s: Channel count == %d", __func__,
ioat->regs->chancnt);
/* Locked by someone else. */
if (ioat->regs->chanctrl & IOAT_CHANCTRL_CHANNEL_IN_USE) {
- IOAT_PMD_WARN("%s: Channel appears locked\n", __func__);
+ IOAT_PMD_WARN("%s: Channel appears locked", __func__);
ioat->regs->chanctrl = 0;
}
@@ -676,7 +676,7 @@ ioat_dmadev_create(const char *name, struct rte_pci_device *dev)
rte_delay_ms(1);
if (++retry >= 200) {
IOAT_PMD_ERR("%s: cannot reset device. CHANCMD=%#"PRIx8
- ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32"\n",
+ ", CHANSTS=%#"PRIx64", CHANERR=%#"PRIx32,
__func__,
ioat->regs->chancmd,
ioat->regs->chansts,
diff --git a/drivers/event/cnxk/cnxk_tim_evdev.c b/drivers/event/cnxk/cnxk_tim_evdev.c
index 6d59fdf909..bba70646fa 100644
--- a/drivers/event/cnxk/cnxk_tim_evdev.c
+++ b/drivers/event/cnxk/cnxk_tim_evdev.c
@@ -268,7 +268,7 @@ cnxk_tim_ring_create(struct rte_event_timer_adapter *adptr)
sso_set_priv_mem_fn(dev->event_dev, NULL);
plt_tim_dbg(
- "Total memory used %" PRIu64 "MB\n",
+ "Total memory used %" PRIu64 "MB",
(uint64_t)(((tim_ring->nb_chunks * tim_ring->chunk_sz) +
(tim_ring->nb_bkts * sizeof(struct cnxk_tim_bkt))) /
BIT_ULL(20)));
diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c
index 5044cb17ef..9dc5edb3fb 100644
--- a/drivers/event/dlb2/dlb2.c
+++ b/drivers/event/dlb2/dlb2.c
@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
ret = dlb2_iface_get_num_resources(handle,
&dlb2->hw_rsrc_query_results);
if (ret) {
- DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d\n", ret);
+ DLB2_LOG_ERR("ioctl get dlb2 num resources, err=%d", ret);
return ret;
}
@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
const char **mask_str = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
if (*max_cq_depth < DLB2_MIN_CQ_DEPTH_OVERRIDE ||
*max_cq_depth > DLB2_MAX_CQ_DEPTH_OVERRIDE ||
!rte_is_power_of_2(*max_cq_depth)) {
- DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2\n",
+ DLB2_LOG_ERR("dlb2: max_cq_depth %d and %d and a power of 2",
DLB2_MIN_CQ_DEPTH_OVERRIDE,
DLB2_MAX_CQ_DEPTH_OVERRIDE);
return -EINVAL;
@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
if (*max_enq_depth < DLB2_MIN_ENQ_DEPTH_OVERRIDE ||
*max_enq_depth > DLB2_MAX_ENQ_DEPTH_OVERRIDE ||
!rte_is_power_of_2(*max_enq_depth)) {
- DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2\n",
+ DLB2_LOG_ERR("dlb2: max_enq_depth %d and %d and a power of 2",
DLB2_MIN_ENQ_DEPTH_OVERRIDE,
DLB2_MAX_ENQ_DEPTH_OVERRIDE);
return -EINVAL;
@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,
if (*max_num_events < 0 || *max_num_events >
DLB2_MAX_NUM_LDB_CREDITS) {
- DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d\n",
+ DLB2_LOG_ERR("dlb2: max_num_events must be between 0 and %d",
DLB2_MAX_NUM_LDB_CREDITS);
return -EINVAL;
}
@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,
if (*num_dir_credits < 0 ||
*num_dir_credits > DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2)) {
- DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d\n",
+ DLB2_LOG_ERR("dlb2: num_dir_credits must be between 0 and %d",
DLB2_MAX_NUM_DIR_CREDITS(DLB2_HW_V2));
return -EINVAL;
}
@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
int first, last, cos_id, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &cos_id) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val\n");
+ DLB2_LOG_ERR("Error parsing ldb port port_cos devarg. Should be port-port:val, or port:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_LDB_PORTS) {
- DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value\n");
+ DLB2_LOG_ERR("Error parsing ldb port cos_id arg, invalid port value");
return -EINVAL;
}
if (cos_id < DLB2_COS_0 || cos_id > DLB2_COS_3) {
- DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4\n");
+ DLB2_LOG_ERR("Error parsing ldb port cos_id devarg, must be between 0 and 4");
return -EINVAL;
}
@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
struct dlb2_cos_bw *cos_bw = opaque;
if (opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,
if (sscanf(value, "%d:%d:%d:%d", &cos_bw->val[0], &cos_bw->val[1],
&cos_bw->val[2], &cos_bw->val[3]) != 4) {
- DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100\n");
+ DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100");
return -EINVAL;
}
if (cos_bw->val[0] + cos_bw->val[1] + cos_bw->val[2] + cos_bw->val[3] > 100) {
- DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100\n");
+ DLB2_LOG_ERR("Error parsing cos bandwidth devarg. Should be bw0:bw1:bw2:bw3 where all values combined are <= 100");
return -EINVAL;
}
@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
return ret;
if (*sw_credit_quanta <= 0) {
- DLB2_LOG_ERR("sw_credit_quanta must be > 0\n");
+ DLB2_LOG_ERR("sw_credit_quanta must be > 0");
return -EINVAL;
}
@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
int ret;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
bool *dlb2_vector_opts_enabled = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
bool *default_ldb_port_allocation = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
bool *enable_cq_weight = opaque;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
int first, last, thresh, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2)) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
return -EINVAL;
}
if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+ DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
return -EINVAL;
}
@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
int first, last, thresh, i;
if (value == NULL || opaque == NULL) {
- DLB2_LOG_ERR("NULL pointer\n");
+ DLB2_LOG_ERR("NULL pointer");
return -EINVAL;
}
@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
} else if (sscanf(value, "%d:%d", &first, &thresh) == 2) {
last = first;
} else {
- DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg. Should be all:val, qid-qid:val, or qid:val");
return -EINVAL;
}
if (first > last || first < 0 ||
last >= DLB2_MAX_NUM_QUEUES(DLB2_HW_V2_5)) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value\n");
+ DLB2_LOG_ERR("Error parsing qid depth devarg, invalid qid value");
return -EINVAL;
}
if (thresh < 0 || thresh > DLB2_MAX_QUEUE_DEPTH_THRESHOLD) {
- DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d\n",
+ DLB2_LOG_ERR("Error parsing qid depth devarg, threshold > %d",
DLB2_MAX_QUEUE_DEPTH_THRESHOLD);
return -EINVAL;
}
@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
if (ret) {
const struct rte_eventdev_data *data = dev->data;
- DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+ DLB2_LOG_ERR("get resources err=%d, devid=%d",
ret, data->dev_id);
/* fn is void, so fall through and return values set up in
* probe
@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
struct dlb2_create_sched_domain_args *cfg;
if (resources_asked == NULL) {
- DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter\n");
+ DLB2_LOG_ERR("dlb2: dlb2_create NULL parameter");
ret = EINVAL;
goto error_exit;
}
@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
if (cos_ports > resources_asked->num_ldb_ports ||
(cos_ports && dlb2->max_cos_port >= resources_asked->num_ldb_ports)) {
- DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports\n");
+ DLB2_LOG_ERR("dlb2: num_ldb_ports < cos_ports");
ret = EINVAL;
goto error_exit;
}
@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_sched_domain_create(handle, cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s\n",
+ DLB2_LOG_ERR("dlb2: domain create failed, ret = %d, extra status: %s",
ret,
dlb2_error_strings[cfg->response.status]);
@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
dlb2_hw_reset_sched_domain(dev, true);
ret = dlb2_hw_query_resources(dlb2);
if (ret) {
- DLB2_LOG_ERR("get resources err=%d, devid=%d\n",
+ DLB2_LOG_ERR("get resources err=%d, devid=%d",
ret, data->dev_id);
return ret;
}
}
if (config->nb_event_queues > rsrcs->num_queues) {
- DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_event_queues parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_event_queues,
rsrcs->num_queues);
return -EINVAL;
}
if (config->nb_event_ports > (rsrcs->num_ldb_ports
+ rsrcs->num_dir_ports)) {
- DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_event_ports parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_event_ports,
(rsrcs->num_ldb_ports + rsrcs->num_dir_ports));
return -EINVAL;
}
if (config->nb_events_limit > rsrcs->nb_events_limit) {
- DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).\n",
+ DLB2_LOG_ERR("nb_events_limit parameter (%d) exceeds the QM device's capabilities (%d).",
config->nb_events_limit,
rsrcs->nb_events_limit);
return -EINVAL;
@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
if (dlb2_hw_create_sched_domain(dlb2, handle, rsrcs,
dlb2->version) < 0) {
- DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed\n");
+ DLB2_LOG_ERR("dlb2_hw_create_sched_domain failed");
return -ENODEV;
}
@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
ret = dlb2_iface_get_sn_allocation(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_sn_allocation ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
ret = dlb2_iface_set_sn_allocation(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: set_sn_allocation ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
ret = dlb2_iface_get_sn_occupancy(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_sn_occupancy ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
}
if (i == DLB2_NUM_SN_GROUPS) {
- DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots\n",
+ DLB2_LOG_ERR("[%s()] No groups with %d sequence_numbers are available or have free slots",
__func__, sequence_numbers);
return;
}
@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_ldb_queue_create(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: create LB event queue error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return -EINVAL;
}
@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
qm_qid = dlb2_hw_create_ldb_queue(dlb2, ev_queue, queue_conf);
if (qm_qid < 0) {
- DLB2_LOG_ERR("Failed to create the load-balanced queue\n");
+ DLB2_LOG_ERR("Failed to create the load-balanced queue");
return qm_qid;
}
@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
RTE_CACHE_LINE_SIZE);
if (qe == NULL) {
- DLB2_LOG_ERR("dlb2: no memory for consume_qe\n");
+ DLB2_LOG_ERR("dlb2: no memory for consume_qe");
return -ENOMEM;
}
qm_port->consume_qe = qe;
@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
RTE_CACHE_LINE_SIZE);
if (qe == NULL) {
- DLB2_LOG_ERR("dlb2: no memory for complete_qe\n");
+ DLB2_LOG_ERR("dlb2: no memory for complete_qe");
return -ENOMEM;
}
qm_port->int_arm_qe = qe;
@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
qm_port->qe4 = rte_zmalloc(mz_name, sz, RTE_CACHE_LINE_SIZE);
if (qm_port->qe4 == NULL) {
- DLB2_LOG_ERR("dlb2: no qe4 memory\n");
+ DLB2_LOG_ERR("dlb2: no qe4 memory");
ret = -ENOMEM;
goto error_exit;
}
ret = dlb2_init_int_arm_qe(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: dlb2_init_int_arm_qe ret=%d", ret);
goto error_exit;
}
ret = dlb2_init_consume_qe(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: dlb2_init_consume_qe ret=%d", ret);
goto error_exit;
}
@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
return -EINVAL;
if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d\n",
+ DLB2_LOG_ERR("dlb2: invalid cq depth, must be at least %d",
DLB2_MIN_CQ_DEPTH);
return -EINVAL;
}
if (dlb2->version == DLB2_HW_V2 && ev_port->cq_weight != 0 &&
ev_port->cq_weight > dequeue_depth) {
- DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d\n",
+ DLB2_LOG_ERR("dlb2: invalid cq dequeue depth %d, must be >= cq weight %d",
dequeue_depth, ev_port->cq_weight);
return -EINVAL;
}
@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_ldb_port_create(handle, &cfg, dlb2->poll_mode);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_ldb_port_create error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
goto error_exit;
}
@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_init_qe_mem(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
goto error_exit;
}
@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_enable_cq_weight(handle, &cq_weight_args);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
ret,
dlb2_error_strings[cfg.response. status]);
goto error_exit;
@@ -1714,7 +1714,7 @@ error_exit:
rte_spinlock_unlock(&handle->resource_lock);
- DLB2_LOG_ERR("dlb2: create ldb port failed!\n");
+ DLB2_LOG_ERR("dlb2: create ldb port failed!");
return ret;
}
@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
return -EINVAL;
if (dequeue_depth < DLB2_MIN_CQ_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d\n",
+ DLB2_LOG_ERR("dlb2: invalid dequeue_depth, must be %d-%d",
DLB2_MIN_CQ_DEPTH, DLB2_MAX_INPUT_QUEUE_DEPTH);
return -EINVAL;
}
if (enqueue_depth < DLB2_MIN_ENQUEUE_DEPTH) {
- DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d\n",
+ DLB2_LOG_ERR("dlb2: invalid enqueue_depth, must be at least %d",
DLB2_MIN_ENQUEUE_DEPTH);
return -EINVAL;
}
@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_dir_port_create(handle, &cfg, dlb2->poll_mode);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: dlb2_dir_port_create error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
goto error_exit;
}
@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
ret = dlb2_init_qe_mem(qm_port, mz_name);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d\n", ret);
+ DLB2_LOG_ERR("dlb2: init_qe_mem failed, ret=%d", ret);
goto error_exit;
}
@@ -1913,7 +1913,7 @@ error_exit:
rte_spinlock_unlock(&handle->resource_lock);
- DLB2_LOG_ERR("dlb2: create dir port failed!\n");
+ DLB2_LOG_ERR("dlb2: create dir port failed!");
return ret;
}
@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
int ret;
if (dev == NULL || port_conf == NULL) {
- DLB2_LOG_ERR("Null parameter\n");
+ DLB2_LOG_ERR("Null parameter");
return -EINVAL;
}
@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
ev_port = &dlb2->ev_ports[ev_port_id];
/* configured? */
if (ev_port->setup_done) {
- DLB2_LOG_ERR("evport %d is already configured\n", ev_port_id);
+ DLB2_LOG_ERR("evport %d is already configured", ev_port_id);
return -EINVAL;
}
@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
if (port_conf->enqueue_depth > sw_credit_quanta ||
port_conf->enqueue_depth > hw_credit_quanta) {
- DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d\n",
+ DLB2_LOG_ERR("Invalid port config. Enqueue depth %d must be <= credit quanta %d and batch size %d",
port_conf->enqueue_depth,
sw_credit_quanta,
hw_credit_quanta);
@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
port_conf->dequeue_depth,
port_conf->enqueue_depth);
if (ret < 0) {
- DLB2_LOG_ERR("Failed to create the lB port ve portId=%d\n",
+ DLB2_LOG_ERR("Failed to create the lB port ve portId=%d",
ev_port_id);
return ret;
@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
port_conf->dequeue_depth,
port_conf->enqueue_depth);
if (ret < 0) {
- DLB2_LOG_ERR("Failed to create the DIR port\n");
+ DLB2_LOG_ERR("Failed to create the DIR port");
return ret;
}
}
@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
ret = dlb2_iface_map_qid(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: map qid error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
- DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d\n",
+ DLB2_LOG_ERR("dlb2: grp=%d, qm_port=%d, qm_qid=%d prio=%d",
handle->domain_id, cfg.port_id,
cfg.qid,
cfg.priority);
@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
first_avail = i;
}
if (first_avail == -1) {
- DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.\n",
+ DLB2_LOG_ERR("dlb2: qm_port %d has no available QID slots.",
ev_port->qm_port.id);
return -EINVAL;
}
@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_dir_queue_create(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: create DIR event queue error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return -EINVAL;
}
@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
qm_qid = dlb2_hw_create_dir_queue(dlb2, ev_queue, ev_port->qm_port.id);
if (qm_qid < 0) {
- DLB2_LOG_ERR("Failed to create the DIR queue\n");
+ DLB2_LOG_ERR("Failed to create the DIR queue");
return qm_qid;
}
@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
err = dlb2_event_queue_join_ldb(dlb2, ev_port, ev_queue, prio);
if (err) {
- DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d\n",
+ DLB2_LOG_ERR("port link failure for %s ev_q %d, ev_port %d",
ev_queue->qm_queue.is_directed ? "DIR" : "LDB",
ev_queue->id, ev_port->id);
@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
queue_is_dir = ev_queue->qm_queue.is_directed;
if (port_is_dir != queue_is_dir) {
- DLB2_LOG_ERR("%s queue %u can't link to %s port %u\n",
+ DLB2_LOG_ERR("%s queue %u can't link to %s port %u",
queue_is_dir ? "DIR" : "LDB", ev_queue->id,
port_is_dir ? "DIR" : "LDB", ev_port->id);
@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if there is space for the requested link */
if (!link_exists && index == -1) {
- DLB2_LOG_ERR("no space for new link\n");
+ DLB2_LOG_ERR("no space for new link");
rte_errno = -ENOSPC;
return -1;
}
@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if the directed port is already linked */
if (ev_port->qm_port.is_directed && ev_port->num_links > 0 &&
!link_exists) {
- DLB2_LOG_ERR("Can't link DIR port %d to >1 queues\n",
+ DLB2_LOG_ERR("Can't link DIR port %d to >1 queues",
ev_port->id);
rte_errno = -EINVAL;
return -1;
@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
/* Check if the directed queue is already linked */
if (ev_queue->qm_queue.is_directed && ev_queue->num_links > 0 &&
!link_exists) {
- DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports\n",
+ DLB2_LOG_ERR("Can't link DIR queue %d to >1 ports",
ev_queue->id);
rte_errno = -EINVAL;
return -1;
@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
RTE_SET_USED(dev);
if (ev_port == NULL) {
- DLB2_LOG_ERR("dlb2: evport not setup\n");
+ DLB2_LOG_ERR("dlb2: evport not setup");
rte_errno = -EINVAL;
return 0;
}
if (!ev_port->setup_done &&
ev_port->qm_port.config_state != DLB2_PREV_CONFIGURED) {
- DLB2_LOG_ERR("dlb2: evport not setup\n");
+ DLB2_LOG_ERR("dlb2: evport not setup");
rte_errno = -EINVAL;
return 0;
}
@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
ret = dlb2_iface_unmap_qid(handle, &cfg);
if (ret < 0)
- DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: unmap qid error, ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
RTE_SET_USED(dev);
if (!ev_port->setup_done) {
- DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+ DLB2_LOG_ERR("dlb2: evport %d is not configured",
ev_port->id);
rte_errno = -EINVAL;
return 0;
@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
int ret, j;
if (queues[i] >= dlb2->num_queues) {
- DLB2_LOG_ERR("dlb2: invalid queue id %d\n", queues[i]);
+ DLB2_LOG_ERR("dlb2: invalid queue id %d", queues[i]);
rte_errno = -EINVAL;
return i; /* return index of offending queue */
}
@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
ret = dlb2_event_queue_detach_ldb(dlb2, ev_port, ev_queue);
if (ret) {
- DLB2_LOG_ERR("unlink err=%d for port %d queue %d\n",
+ DLB2_LOG_ERR("unlink err=%d for port %d queue %d",
ret, ev_port->id, queues[i]);
rte_errno = -ENOENT;
return i; /* return index of offending queue */
@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
RTE_SET_USED(dev);
if (!ev_port->setup_done) {
- DLB2_LOG_ERR("dlb2: evport %d is not configured\n",
+ DLB2_LOG_ERR("dlb2: evport %d is not configured",
ev_port->id);
rte_errno = -EINVAL;
return 0;
@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
ret = dlb2_iface_pending_port_unmaps(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: num_unlinks_in_progress ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
rte_spinlock_lock(&dlb2->qm_instance.resource_lock);
if (dlb2->run_state != DLB2_RUN_STATE_STOPPED) {
- DLB2_LOG_ERR("bad state %d for dev_start\n",
+ DLB2_LOG_ERR("bad state %d for dev_start",
(int)dlb2->run_state);
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return -EINVAL;
@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
ret = dlb2_iface_sched_domain_start(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: sched_domain_start ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
case RTE_SCHED_TYPE_ORDERED:
DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
if (qm_queue->sched_type != RTE_SCHED_TYPE_ORDERED) {
- DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d\n",
+ DLB2_LOG_ERR("dlb2: tried to send ordered event to unordered queue %d",
*queue_id);
rte_errno = -EINVAL;
return 1;
@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
*sched_type = DLB2_SCHED_UNORDERED;
break;
default:
- DLB2_LOG_ERR("Unsupported LDB sched type in put_qe\n");
+ DLB2_LOG_ERR("Unsupported LDB sched type in put_qe");
DLB2_INC_STAT(ev_port->stats.tx_invalid, 1);
rte_errno = -EINVAL;
return 1;
@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
int i;
if (port_id > dlb2->num_ports) {
- DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release\n",
+ DLB2_LOG_ERR("Invalid port id %d in dlb2-event_release",
port_id);
rte_errno = -EINVAL;
return;
@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
sw_credit_update:
/* each release returns one credit */
if (unlikely(!ev_port->outstanding_releases)) {
- DLB2_LOG_ERR("%s: Outstanding releases underflowed.\n",
+ DLB2_LOG_ERR("%s: Outstanding releases underflowed.",
__func__);
return;
}
@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
* buffer is a mbuf.
*/
if (unlikely(qe->error)) {
- DLB2_LOG_ERR("QE error bit ON\n");
+ DLB2_LOG_ERR("QE error bit ON");
DLB2_INC_STAT(ev_port->stats.traffic.rx_drop, 1);
dlb2_consume_qe_immediate(qm_port, 1);
continue; /* Ignore */
@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_get_ldb_queue_depth(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_ldb_queue_depth ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
ret = dlb2_iface_get_dir_queue_depth(handle, &cfg);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)\n",
+ DLB2_LOG_ERR("dlb2: get_dir_queue_depth ret=%d (driver status: %s)",
ret, dlb2_error_strings[cfg.response.status]);
return ret;
}
@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
}
if (i == dlb2->num_ports) {
- DLB2_LOG_ERR("internal error: no LDB ev_ports\n");
+ DLB2_LOG_ERR("internal error: no LDB ev_ports");
return;
}
@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
rte_event_port_unlink(dev_id, ev_port->id, NULL, 0);
if (rte_errno) {
- DLB2_LOG_ERR("internal error: failed to unlink ev_port %d\n",
+ DLB2_LOG_ERR("internal error: failed to unlink ev_port %d",
ev_port->id);
return;
}
@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
/* Link the ev_port to the queue */
ret = rte_event_port_link(dev_id, ev_port->id, &qid, &prio, 1);
if (ret != 1) {
- DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d\n",
+ DLB2_LOG_ERR("internal error: failed to link ev_port %d to queue %d",
ev_port->id, qid);
return;
}
@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
/* Unlink the ev_port from the queue */
ret = rte_event_port_unlink(dev_id, ev_port->id, &qid, 1);
if (ret != 1) {
- DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d\n",
+ DLB2_LOG_ERR("internal error: failed to unlink ev_port %d to queue %d",
ev_port->id, qid);
return;
}
@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return;
} else if (dlb2->run_state != DLB2_RUN_STATE_STARTED) {
- DLB2_LOG_ERR("Internal error: bad state %d for dev_stop\n",
+ DLB2_LOG_ERR("Internal error: bad state %d for dev_stop",
(int)dlb2->run_state);
rte_spinlock_unlock(&dlb2->qm_instance.resource_lock);
return;
@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_open(&dlb2->qm_instance, name);
if (err < 0) {
- DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+ DLB2_LOG_ERR("could not open event hardware device, err=%d",
err);
return err;
}
@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_get_device_version(&dlb2->qm_instance,
&dlb2->revision);
if (err < 0) {
- DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to get the device version, err=%d",
err);
return err;
}
err = dlb2_hw_query_resources(dlb2);
if (err) {
- DLB2_LOG_ERR("get resources err=%d for %s\n",
+ DLB2_LOG_ERR("get resources err=%d for %s",
err, name);
return err;
}
@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
break;
}
if (ret) {
- DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to configure class of service, err=%d",
err);
return err;
}
@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_get_cq_poll_mode(&dlb2->qm_instance, &dlb2->poll_mode);
if (err < 0) {
- DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d\n",
+ DLB2_LOG_ERR("dlb2: failed to get the poll mode, err=%d",
err);
return err;
}
@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
/* Complete xtstats runtime initialization */
err = dlb2_xstats_init(dlb2);
if (err) {
- DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d\n", err);
+ DLB2_LOG_ERR("dlb2: failed to init xstats, err=%d", err);
return err;
}
@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
err = dlb2_iface_open(&dlb2->qm_instance, name);
if (err < 0) {
- DLB2_LOG_ERR("could not open event hardware device, err=%d\n",
+ DLB2_LOG_ERR("could not open event hardware device, err=%d",
err);
return err;
}
err = dlb2_hw_query_resources(dlb2);
if (err) {
- DLB2_LOG_ERR("get resources err=%d for %s\n",
+ DLB2_LOG_ERR("get resources err=%d for %s",
err, name);
return err;
}
diff --git a/drivers/event/dlb2/dlb2_xstats.c b/drivers/event/dlb2/dlb2_xstats.c
index ff15271dda..28de48e24e 100644
--- a/drivers/event/dlb2/dlb2_xstats.c
+++ b/drivers/event/dlb2/dlb2_xstats.c
@@ -766,7 +766,7 @@ dlb2_xstats_update(struct dlb2_eventdev *dlb2,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
goto invalid_value;
}
@@ -827,7 +827,7 @@ dlb2_eventdev_xstats_get_by_name(const struct rte_eventdev *dev,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n",
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d",
xs->fn_id);
return (uint64_t)-1;
}
@@ -865,7 +865,7 @@ dlb2_xstats_reset_range(struct dlb2_eventdev *dlb2, uint32_t start,
fn = get_queue_stat;
break;
default:
- DLB2_LOG_ERR("Unexpected xstat fn_id %d\n", xs->fn_id);
+ DLB2_LOG_ERR("Unexpected xstat fn_id %d", xs->fn_id);
return;
}
diff --git a/drivers/event/dlb2/pf/dlb2_main.c b/drivers/event/dlb2/pf/dlb2_main.c
index a95d3227a4..89eabc2a93 100644
--- a/drivers/event/dlb2/pf/dlb2_main.c
+++ b/drivers/event/dlb2/pf/dlb2_main.c
@@ -72,7 +72,7 @@ static int dlb2_pf_wait_for_device_ready(struct dlb2_dev *dlb2_dev,
};
if (retries == DLB2_READY_RETRY_LIMIT) {
- DLB2_LOG_ERR("[%s()] wait for device ready timed out\n",
+ DLB2_LOG_ERR("[%s()] wait for device ready timed out",
__func__);
return -1;
}
@@ -214,7 +214,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
pcie_cap_offset = rte_pci_find_capability(pdev, RTE_PCI_CAP_ID_EXP);
if (pcie_cap_offset < 0) {
- DLB2_LOG_ERR("[%s()] failed to find the pcie capability\n",
+ DLB2_LOG_ERR("[%s()] failed to find the pcie capability",
__func__);
return pcie_cap_offset;
}
@@ -261,7 +261,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = RTE_PCI_COMMAND;
cmd = 0;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pci command",
__func__);
return ret;
}
@@ -273,7 +273,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVSTA;
ret = rte_pci_read_config(pdev, &devsta_busy_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to read the pci device status\n",
+ DLB2_LOG_ERR("[%s()] failed to read the pci device status",
__func__);
return ret;
}
@@ -286,7 +286,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
}
if (wait_count == 4) {
- DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out\n",
+ DLB2_LOG_ERR("[%s()] wait for pci pending transactions timed out",
__func__);
return -1;
}
@@ -294,7 +294,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
ret = rte_pci_read_config(pdev, &devctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to read the pcie device control\n",
+ DLB2_LOG_ERR("[%s()] failed to read the pcie device control",
__func__);
return ret;
}
@@ -303,7 +303,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &devctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie device control\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie device control",
__func__);
return ret;
}
@@ -316,7 +316,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL;
ret = rte_pci_write_config(pdev, &dev_ctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie device control at offset %d",
__func__, (int)off);
return ret;
}
@@ -324,7 +324,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL;
ret = rte_pci_write_config(pdev, &lnk_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -332,7 +332,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL;
ret = rte_pci_write_config(pdev, &slt_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -340,7 +340,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_RTCTL;
ret = rte_pci_write_config(pdev, &rt_ctl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -348,7 +348,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_DEVCTL2;
ret = rte_pci_write_config(pdev, &dev_ctl2_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -356,7 +356,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_LNKCTL2;
ret = rte_pci_write_config(pdev, &lnk_word2, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -364,7 +364,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pcie_cap_offset + RTE_PCI_EXP_SLTCTL2;
ret = rte_pci_write_config(pdev, &slt_word2, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -376,7 +376,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pri_cap_offset + RTE_PCI_PRI_ALLOC_REQ;
ret = rte_pci_write_config(pdev, &pri_reqs_dword, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -384,7 +384,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = pri_cap_offset + RTE_PCI_PRI_CTRL;
ret = rte_pci_write_config(pdev, &pri_ctrl_word, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -402,7 +402,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -413,7 +413,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -424,7 +424,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &tmp, 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -434,7 +434,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = (i - 1) * 4;
ret = rte_pci_write_config(pdev, &dword[i - 1], 4, off);
if (ret != 4) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -444,7 +444,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
cmd &= ~RTE_PCI_COMMAND_INTX_DISABLE;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pci command\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pci command",
__func__);
return ret;
}
@@ -457,7 +457,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
cmd |= RTE_PCI_MSIX_FLAGS_ENABLE;
cmd |= RTE_PCI_MSIX_FLAGS_MASKALL;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+ DLB2_LOG_ERR("[%s()] failed to write msix flags",
__func__);
return ret;
}
@@ -467,7 +467,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
if (rte_pci_read_config(pdev, &cmd, 2, off) == 2) {
cmd &= ~RTE_PCI_MSIX_FLAGS_MASKALL;
if (rte_pci_write_config(pdev, &cmd, 2, off) != 2) {
- DLB2_LOG_ERR("[%s()] failed to write msix flags\n",
+ DLB2_LOG_ERR("[%s()] failed to write msix flags",
__func__);
return ret;
}
@@ -493,7 +493,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -509,7 +509,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
off = acs_cap_offset + RTE_PCI_ACS_CTRL;
ret = rte_pci_write_config(pdev, &acs_ctrl, 2, off);
if (ret != 2) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return ret;
}
@@ -520,7 +520,7 @@ dlb2_pf_reset(struct dlb2_dev *dlb2_dev)
*/
off = DLB2_PCI_PASID_CAP_OFFSET;
if (rte_pci_pasid_set_state(pdev, off, false) < 0) {
- DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d\n",
+ DLB2_LOG_ERR("[%s()] failed to write the pcie config space at offset %d",
__func__, (int)off);
return -1;
}
diff --git a/drivers/event/dlb2/pf/dlb2_pf.c b/drivers/event/dlb2/pf/dlb2_pf.c
index 3d15250e11..019e90f7e7 100644
--- a/drivers/event/dlb2/pf/dlb2_pf.c
+++ b/drivers/event/dlb2/pf/dlb2_pf.c
@@ -336,7 +336,7 @@ dlb2_pf_ldb_port_create(struct dlb2_hw_dev *handle,
/* Lock the page in memory */
ret = rte_mem_lock_page(port_base);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+ DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
goto create_port_err;
}
@@ -411,7 +411,7 @@ dlb2_pf_dir_port_create(struct dlb2_hw_dev *handle,
/* Lock the page in memory */
ret = rte_mem_lock_page(port_base);
if (ret < 0) {
- DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o\n");
+ DLB2_LOG_ERR("dlb2 pf pmd could not lock page for device i/o");
goto create_port_err;
}
@@ -737,7 +737,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
&dlb2_args,
dlb2->version);
if (ret) {
- DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d\n",
+ DLB2_LOG_ERR("PFPMD failed to parse args ret=%d, errno=%d",
ret, rte_errno);
goto dlb2_probe_failed;
}
@@ -748,7 +748,7 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
dlb2->qm_instance.pf_dev = dlb2_probe(pci_dev, probe_args);
if (dlb2->qm_instance.pf_dev == NULL) {
- DLB2_LOG_ERR("DLB2 PF Probe failed with error %d\n",
+ DLB2_LOG_ERR("DLB2 PF Probe failed with error %d",
rte_errno);
ret = -rte_errno;
goto dlb2_probe_failed;
@@ -766,13 +766,13 @@ dlb2_eventdev_pci_init(struct rte_eventdev *eventdev)
if (ret)
goto dlb2_probe_failed;
- DLB2_LOG_INFO("DLB2 PF Probe success\n");
+ DLB2_LOG_INFO("DLB2 PF Probe success");
return 0;
dlb2_probe_failed:
- DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d\n", ret);
+ DLB2_LOG_INFO("DLB2 PF Probe failed, ret=%d", ret);
return ret;
}
@@ -811,7 +811,7 @@ event_dlb2_pci_probe(struct rte_pci_driver *pci_drv,
event_dlb2_pf_name);
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -826,7 +826,7 @@ event_dlb2_pci_remove(struct rte_pci_device *pci_dev)
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -845,7 +845,7 @@ event_dlb2_5_pci_probe(struct rte_pci_driver *pci_drv,
event_dlb2_pf_name);
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_probe_named() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
@@ -860,7 +860,7 @@ event_dlb2_5_pci_remove(struct rte_pci_device *pci_dev)
if (ret) {
DLB2_LOG_INFO("rte_event_pmd_pci_remove() failed, "
- "ret=%d\n", ret);
+ "ret=%d", ret);
}
return ret;
diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index dd4e64395f..4658eaf3a2 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -74,7 +74,7 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[],
ret = dpaa2_affine_qbman_swp();
if (ret < 0) {
DPAA2_EVENTDEV_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -276,7 +276,7 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[],
ret = dpaa2_affine_qbman_swp();
if (ret < 0) {
DPAA2_EVENTDEV_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
for (i = 0; i < cryptodev->data->nb_queue_pairs; i++) {
ret = dpaa2_sec_eventq_attach(cryptodev, i, dpcon, ev);
if (ret) {
- DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d\n",
+ DPAA2_EVENTDEV_ERR("dpaa2_sec_eventq_attach failed: ret %d",
ret);
goto fail;
}
@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
dpcon, &conf->ev);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_attach failed: ret: %d\n", ret);
+ "dpaa2_sec_eventq_attach failed: ret: %d", ret);
return ret;
}
return 0;
@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
ret = dpaa2_sec_eventq_detach(cdev, i);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_detach failed:ret %d\n", ret);
+ "dpaa2_sec_eventq_detach failed:ret %d", ret);
return ret;
}
}
@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
ret = dpaa2_sec_eventq_detach(cryptodev, rx_queue_id);
if (ret) {
DPAA2_EVENTDEV_ERR(
- "dpaa2_sec_eventq_detach failed: ret: %d\n", ret);
+ "dpaa2_sec_eventq_detach failed: ret: %d", ret);
return ret;
}
@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)
eventdev = rte_event_pmd_get_named_dev(name);
if (eventdev == NULL) {
- RTE_EDEV_LOG_ERR("eventdev with name %s not allocated", name);
+ DPAA2_EVENTDEV_ERR("eventdev with name %s not allocated", name);
return -1;
}
diff --git a/drivers/event/octeontx/timvf_evdev.c b/drivers/event/octeontx/timvf_evdev.c
index 090b3ed183..82f17144a6 100644
--- a/drivers/event/octeontx/timvf_evdev.c
+++ b/drivers/event/octeontx/timvf_evdev.c
@@ -196,7 +196,7 @@ timvf_ring_start(const struct rte_event_timer_adapter *adptr)
timr->tck_int = NSEC2CLK(timr->tck_nsec, rte_get_timer_hz());
timr->fast_div = rte_reciprocal_value_u64(timr->tck_int);
timvf_log_info("nb_bkts %d min_ns %"PRIu64" min_cyc %"PRIu64""
- " maxtmo %"PRIu64"\n",
+ " maxtmo %"PRIu64,
timr->nb_bkts, timr->tck_nsec, interval,
timr->max_tout);
diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c
index 0cccaf7e97..fe0c0ede6f 100644
--- a/drivers/event/opdl/opdl_evdev.c
+++ b/drivers/event/opdl/opdl_evdev.c
@@ -99,7 +99,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (unlikely(dev->data->dev_started)) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to link queue (%u) to port %d while device started\n",
+ "Attempt to link queue (%u) to port %d while device started",
dev->data->dev_id,
queues[0],
p->id);
@@ -110,7 +110,7 @@ opdl_port_link(struct rte_eventdev *dev,
/* Max of 1 queue per port */
if (num > 1) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to link more than one queue (%u) to port %d requested\n",
+ "Attempt to link more than one queue (%u) to port %d requested",
dev->data->dev_id,
num,
p->id);
@@ -120,7 +120,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (!p->configured) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "port %d not configured, cannot link to %u\n",
+ "port %d not configured, cannot link to %u",
dev->data->dev_id,
p->id,
queues[0]);
@@ -130,7 +130,7 @@ opdl_port_link(struct rte_eventdev *dev,
if (p->external_qid != OPDL_INVALID_QID) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "port %d already linked to queue %u, cannot link to %u\n",
+ "port %d already linked to queue %u, cannot link to %u",
dev->data->dev_id,
p->id,
p->external_qid,
@@ -157,7 +157,7 @@ opdl_port_unlink(struct rte_eventdev *dev,
if (unlikely(dev->data->dev_started)) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to unlink queue (%u) to port %d while device started\n",
+ "Attempt to unlink queue (%u) to port %d while device started",
dev->data->dev_id,
queues[0],
p->id);
@@ -188,7 +188,7 @@ opdl_port_setup(struct rte_eventdev *dev,
/* Check if port already configured */
if (p->configured) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Attempt to setup port %d which is already setup\n",
+ "Attempt to setup port %d which is already setup",
dev->data->dev_id,
p->id);
return -EDQUOT;
@@ -244,7 +244,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
/* Extra sanity check, probably not needed */
if (queue_id == OPDL_INVALID_QID) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Invalid queue id %u requested\n",
+ "Invalid queue id %u requested",
dev->data->dev_id,
queue_id);
return -EINVAL;
@@ -252,7 +252,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
if (device->nb_q_md > device->max_queue_nb) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Max number of queues %u exceeded by request %u\n",
+ "Max number of queues %u exceeded by request %u",
dev->data->dev_id,
device->max_queue_nb,
device->nb_q_md);
@@ -262,7 +262,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
& conf->event_queue_cfg) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "QUEUE_CFG_ALL_TYPES not supported\n",
+ "QUEUE_CFG_ALL_TYPES not supported",
dev->data->dev_id);
return -ENOTSUP;
} else if (RTE_EVENT_QUEUE_CFG_SINGLE_LINK
@@ -281,7 +281,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
break;
default:
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "Unknown queue type %d requested\n",
+ "Unknown queue type %d requested",
dev->data->dev_id,
conf->event_queue_cfg);
return -EINVAL;
@@ -292,7 +292,7 @@ opdl_queue_setup(struct rte_eventdev *dev,
for (i = 0; i < device->nb_q_md; i++) {
if (device->q_md[i].ext_id == queue_id) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "queue id %u already setup\n",
+ "queue id %u already setup",
dev->data->dev_id,
queue_id);
return -EINVAL;
@@ -352,7 +352,7 @@ opdl_dev_configure(const struct rte_eventdev *dev)
if (conf->event_dev_cfg & RTE_EVENT_DEV_CFG_PER_DEQUEUE_TIMEOUT) {
PMD_DRV_LOG(ERR, "DEV_ID:[%02d] : "
- "DEQUEUE_TIMEOUT not supported\n",
+ "DEQUEUE_TIMEOUT not supported",
dev->data->dev_id);
return -ENOTSUP;
}
@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)
if (!kvlist) {
PMD_DRV_LOG(INFO,
- "Ignoring unsupported parameters when creating device '%s'\n",
+ "Ignoring unsupported parameters when creating device '%s'",
name);
} else {
int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)
PMD_DRV_LOG(INFO, "DEV_ID:[%02d] : "
"Success - creating eventdev device %s, numa_node:[%d], do_validation:[%s]"
- " , self_test:[%s]\n",
+ " , self_test:[%s]",
dev->data->dev_id,
name,
socket_id,
@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
if (name == NULL)
return -EINVAL;
- PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s\n", name);
+ PMD_DRV_LOG(INFO, "Closing eventdev opdl device %s", name);
return rte_event_pmd_vdev_uninit(name);
}
diff --git a/drivers/event/opdl/opdl_test.c b/drivers/event/opdl/opdl_test.c
index b69c4769dc..9b0c4db5ce 100644
--- a/drivers/event/opdl/opdl_test.c
+++ b/drivers/event/opdl/opdl_test.c
@@ -101,7 +101,7 @@ init(struct test *t, int nb_queues, int nb_ports)
ret = rte_event_dev_configure(evdev, &config);
if (ret < 0)
- PMD_DRV_LOG(ERR, "%d: Error configuring device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error configuring device", __LINE__);
return ret;
};
@@ -119,7 +119,7 @@ create_ports(struct test *t, int num_ports)
for (i = 0; i < num_ports; i++) {
if (rte_event_port_setup(evdev, i, &conf) < 0) {
- PMD_DRV_LOG(ERR, "Error setting up port %d\n", i);
+ PMD_DRV_LOG(ERR, "Error setting up port %d", i);
return -1;
}
t->port[i] = i;
@@ -158,7 +158,7 @@ create_queues_type(struct test *t, int num_qids, enum queue_type flags)
for (i = t->nb_qids ; i < t->nb_qids + num_qids; i++) {
if (rte_event_queue_setup(evdev, i, &conf) < 0) {
- PMD_DRV_LOG(ERR, "%d: error creating qid %d\n ",
+ PMD_DRV_LOG(ERR, "%d: error creating qid %d ",
__LINE__, i);
return -1;
}
@@ -180,7 +180,7 @@ cleanup(struct test *t __rte_unused)
{
rte_event_dev_stop(evdev);
rte_event_dev_close(evdev);
- PMD_DRV_LOG(ERR, "clean up for test done\n");
+ PMD_DRV_LOG(ERR, "clean up for test done");
return 0;
};
@@ -202,7 +202,7 @@ ordered_basic(struct test *t)
if (init(t, 2, tx_port+1) < 0 ||
create_ports(t, tx_port+1) < 0 ||
create_queues_type(t, 2, OPDL_Q_TYPE_ORDERED)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -226,7 +226,7 @@ ordered_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
__LINE__);
cleanup(t);
return -1;
@@ -236,13 +236,13 @@ ordered_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping TX qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping TX qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
return -1;
}
/* Enqueue 3 packets to the rx port */
@@ -250,7 +250,7 @@ ordered_basic(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -262,7 +262,7 @@ ordered_basic(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__, i, err);
return -1;
}
@@ -278,7 +278,7 @@ ordered_basic(struct test *t)
deq_pkts = rte_event_dequeue_burst(evdev, t->port[i],
&deq_ev[i], 1, 0);
if (deq_pkts != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to deq\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Failed to deq", __LINE__);
rte_event_dev_dump(evdev, stdout);
return -1;
}
@@ -286,7 +286,7 @@ ordered_basic(struct test *t)
if (seq != (i-1)) {
PMD_DRV_LOG(ERR, " seq test failed ! eq is %d , "
- "port number is %u\n", seq, i);
+ "port number is %u", seq, i);
return -1;
}
}
@@ -298,7 +298,7 @@ ordered_basic(struct test *t)
deq_ev[i].queue_id = t->qid[1];
err = rte_event_enqueue_burst(evdev, t->port[i], &deq_ev[i], 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue", __LINE__);
return -1;
}
}
@@ -309,7 +309,7 @@ ordered_basic(struct test *t)
/* Check to see if we've got all 3 packets */
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
rte_event_dev_dump(evdev, stdout);
return 1;
@@ -339,7 +339,7 @@ atomic_basic(struct test *t)
if (init(t, 2, tx_port+1) < 0 ||
create_ports(t, tx_port+1) < 0 ||
create_queues_type(t, 2, OPDL_Q_TYPE_ATOMIC)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -364,7 +364,7 @@ atomic_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[i], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n",
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid",
__LINE__);
cleanup(t);
return -1;
@@ -374,13 +374,13 @@ atomic_basic(struct test *t)
err = rte_event_port_link(evdev, t->port[tx_port], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping TX qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping TX qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error with start call\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error with start call", __LINE__);
return -1;
}
@@ -389,7 +389,7 @@ atomic_basic(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -402,7 +402,7 @@ atomic_basic(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__, i, err);
return -1;
}
@@ -419,7 +419,7 @@ atomic_basic(struct test *t)
if (t->port[i] != 2) {
if (deq_pkts != 0) {
- PMD_DRV_LOG(ERR, "%d: deq none zero !\n",
+ PMD_DRV_LOG(ERR, "%d: deq none zero !",
__LINE__);
rte_event_dev_dump(evdev, stdout);
return -1;
@@ -427,7 +427,7 @@ atomic_basic(struct test *t)
} else {
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !\n",
+ PMD_DRV_LOG(ERR, "%d: deq not eqal to 3 %u !",
__LINE__, deq_pkts);
rte_event_dev_dump(evdev, stdout);
return -1;
@@ -444,7 +444,7 @@ atomic_basic(struct test *t)
if (err != 3) {
PMD_DRV_LOG(ERR, "port %d: Failed to enqueue pkt %u, "
- "retval = %u\n",
+ "retval = %u",
t->port[i], 3, err);
return -1;
}
@@ -460,7 +460,7 @@ atomic_basic(struct test *t)
/* Check to see if we've got all 3 packets */
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 3 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
rte_event_dev_dump(evdev, stdout);
return 1;
@@ -568,7 +568,7 @@ single_link_w_stats(struct test *t)
create_ports(t, 3) < 0 || /* 0,1,2 */
create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -587,7 +587,7 @@ single_link_w_stats(struct test *t)
err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+ PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
__LINE__,
t->port[1],
t->qid[0]);
@@ -598,7 +598,7 @@ single_link_w_stats(struct test *t)
err = rte_event_port_link(evdev, t->port[2], &t->qid[1], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]\n",
+ PMD_DRV_LOG(ERR, "%d: error linking port:[%u] to queue:[%u]",
__LINE__,
t->port[2],
t->qid[1]);
@@ -607,7 +607,7 @@ single_link_w_stats(struct test *t)
}
if (rte_event_dev_start(evdev) != 0) {
- PMD_DRV_LOG(ERR, "%d: failed to start device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: failed to start device", __LINE__);
cleanup(t);
return -1;
}
@@ -619,7 +619,7 @@ single_link_w_stats(struct test *t)
struct rte_event ev;
mbufs[i] = rte_gen_arp(0, t->mbuf_pool);
if (!mbufs[i]) {
- PMD_DRV_LOG(ERR, "%d: gen of pkt failed\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: gen of pkt failed", __LINE__);
return -1;
}
@@ -631,7 +631,7 @@ single_link_w_stats(struct test *t)
/* generate pkt and enqueue */
err = rte_event_enqueue_burst(evdev, t->port[rx_port], &ev, 1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u\n",
+ PMD_DRV_LOG(ERR, "%d: Failed to enqueue pkt %u, retval = %u",
__LINE__,
t->port[rx_port],
err);
@@ -647,7 +647,7 @@ single_link_w_stats(struct test *t)
deq_ev, 3, 0);
if (deq_pkts != 3) {
- PMD_DRV_LOG(ERR, "%d: deq not 3 !\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: deq not 3 !", __LINE__);
cleanup(t);
return -1;
}
@@ -662,7 +662,7 @@ single_link_w_stats(struct test *t)
NEW_NUM_PACKETS);
if (deq_pkts != 2) {
- PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!\n", __LINE__, deq_pkts);
+ PMD_DRV_LOG(ERR, "%d: enq not 2 but %u!", __LINE__, deq_pkts);
cleanup(t);
return -1;
}
@@ -676,7 +676,7 @@ single_link_w_stats(struct test *t)
/* Check to see if we've got all 2 packets */
if (deq_pkts != 2) {
- PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d\n",
+ PMD_DRV_LOG(ERR, "%d: expected 2 pkts at tx port got %d from port %d",
__LINE__, deq_pkts, tx_port);
cleanup(t);
return -1;
@@ -706,7 +706,7 @@ single_link(struct test *t)
create_ports(t, 3) < 0 || /* 0,1,2 */
create_queues_type(t, 1, OPDL_Q_TYPE_SINGLE_LINK) < 0 ||
create_queues_type(t, 1, OPDL_Q_TYPE_ORDERED) < 0) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -725,7 +725,7 @@ single_link(struct test *t)
err = rte_event_port_link(evdev, t->port[1], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
cleanup(t);
return -1;
}
@@ -733,14 +733,14 @@ single_link(struct test *t)
err = rte_event_port_link(evdev, t->port[2], &t->qid[0], NULL,
1);
if (err != 1) {
- PMD_DRV_LOG(ERR, "%d: error mapping lb qid\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: error mapping lb qid", __LINE__);
cleanup(t);
return -1;
}
if (rte_event_dev_start(evdev) == 0) {
PMD_DRV_LOG(ERR, "%d: start DIDN'T FAIL with more than 1 "
- "SINGLE_LINK PORT\n", __LINE__);
+ "SINGLE_LINK PORT", __LINE__);
cleanup(t);
return -1;
}
@@ -789,7 +789,7 @@ qid_basic(struct test *t)
if (init(t, NUM_QUEUES, NUM_QUEUES+1) < 0 ||
create_ports(t, NUM_QUEUES+1) < 0 ||
create_queues_type(t, NUM_QUEUES, OPDL_Q_TYPE_ORDERED)) {
- PMD_DRV_LOG(ERR, "%d: Error initializing device\n", __LINE__);
+ PMD_DRV_LOG(ERR, "%d: Error initializing device", __LINE__);
return -1;
}
@@ -805,7 +805,7 @@ qid_basic(struct test *t)
if (nb_linked != 1) {
- PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: error mapping port:%u to queue:%u",
__FILE__,
__LINE__,
i + 1,
@@ -826,7 +826,7 @@ qid_basic(struct test *t)
&t_qid,
NULL,
1) > 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Second call to port link on same port DID NOT fail",
__FILE__,
__LINE__);
err = -1;
@@ -841,7 +841,7 @@ qid_basic(struct test *t)
BATCH_SIZE,
0);
if (test_num_events != 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing 0 packets from port %u on stopped device",
__FILE__,
__LINE__,
p_id);
@@ -855,7 +855,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (test_num_events != 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing 0 packets to port %u on stopped device",
__FILE__,
__LINE__,
p_id);
@@ -868,7 +868,7 @@ qid_basic(struct test *t)
/* Start the device */
if (!err) {
if (rte_event_dev_start(evdev) < 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Error with start call\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error with start call",
__FILE__,
__LINE__);
err = -1;
@@ -884,7 +884,7 @@ qid_basic(struct test *t)
&t_qid,
NULL,
1) > 0) {
- PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Call to port link on started device DID NOT fail",
__FILE__,
__LINE__);
err = -1;
@@ -904,7 +904,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing rx packets",
__FILE__,
__LINE__);
err = -1;
@@ -921,7 +921,7 @@ qid_basic(struct test *t)
0);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from port %u",
__FILE__,
__LINE__,
p_id);
@@ -930,7 +930,7 @@ qid_basic(struct test *t)
}
if (ev[0].queue_id != q_id) {
- PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error event portid[%u] q_id:[%u] does not match expected:[%u]",
__FILE__,
__LINE__,
p_id,
@@ -949,7 +949,7 @@ qid_basic(struct test *t)
ev,
BATCH_SIZE);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error enqueuing packets from port:%u to queue:%u",
__FILE__,
__LINE__,
p_id,
@@ -967,7 +967,7 @@ qid_basic(struct test *t)
BATCH_SIZE,
0);
if (num_events != BATCH_SIZE) {
- PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u\n",
+ PMD_DRV_LOG(ERR, "%s:%d: Error dequeuing packets from tx port %u",
__FILE__,
__LINE__,
p_id);
@@ -993,17 +993,17 @@ opdl_selftest(void)
evdev = rte_event_dev_get_dev_id(eventdev_name);
if (evdev < 0) {
- PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.\n",
+ PMD_DRV_LOG(ERR, "%d: Eventdev %s not found - creating.",
__LINE__, eventdev_name);
/* turn on stats by default */
if (rte_vdev_init(eventdev_name, "do_validation=1") < 0) {
- PMD_DRV_LOG(ERR, "Error creating eventdev\n");
+ PMD_DRV_LOG(ERR, "Error creating eventdev");
free(t);
return -1;
}
evdev = rte_event_dev_get_dev_id(eventdev_name);
if (evdev < 0) {
- PMD_DRV_LOG(ERR, "Error finding newly created eventdev\n");
+ PMD_DRV_LOG(ERR, "Error finding newly created eventdev");
free(t);
return -1;
}
@@ -1019,27 +1019,27 @@ opdl_selftest(void)
512, /* use very small mbufs */
rte_socket_id());
if (!eventdev_func_mempool) {
- PMD_DRV_LOG(ERR, "ERROR creating mempool\n");
+ PMD_DRV_LOG(ERR, "ERROR creating mempool");
free(t);
return -1;
}
}
t->mbuf_pool = eventdev_func_mempool;
- PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running Ordered Basic test...");
ret = ordered_basic(t);
- PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running Atomic Basic test...");
ret = atomic_basic(t);
- PMD_DRV_LOG(ERR, "*** Running QID Basic test...\n");
+ PMD_DRV_LOG(ERR, "*** Running QID Basic test...");
ret = qid_basic(t);
- PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...\n");
+ PMD_DRV_LOG(ERR, "*** Running SINGLE LINK failure test...");
ret = single_link(t);
- PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...\n");
+ PMD_DRV_LOG(ERR, "*** Running SINGLE LINK w stats test...");
ret = single_link_w_stats(t);
/*
diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c
index 2096496917..babe77a20f 100644
--- a/drivers/event/sw/sw_evdev.c
+++ b/drivers/event/sw/sw_evdev.c
@@ -173,7 +173,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
dev->data->socket_id,
RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->rx_worker_ring == NULL) {
- SW_LOG_ERR("Error creating RX worker ring for port %d\n",
+ SW_LOG_ERR("Error creating RX worker ring for port %d",
port_id);
return -1;
}
@@ -193,7 +193,7 @@ sw_port_setup(struct rte_eventdev *dev, uint8_t port_id,
RING_F_SP_ENQ | RING_F_SC_DEQ | RING_F_EXACT_SZ);
if (p->cq_worker_ring == NULL) {
rte_event_ring_free(p->rx_worker_ring);
- SW_LOG_ERR("Error creating CQ worker ring for port %d\n",
+ SW_LOG_ERR("Error creating CQ worker ring for port %d",
port_id);
return -1;
}
@@ -253,7 +253,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,
if (!window_size) {
SW_LOG_DBG(
- "invalid reorder_window_size for ordered queue\n"
+ "invalid reorder_window_size for ordered queue"
);
goto cleanup;
}
@@ -262,7 +262,7 @@ qid_init(struct sw_evdev *sw, unsigned int idx, int type,
window_size * sizeof(qid->reorder_buffer[0]),
0, socket_id);
if (!qid->reorder_buffer) {
- SW_LOG_DBG("reorder_buffer malloc failed\n");
+ SW_LOG_DBG("reorder_buffer malloc failed");
goto cleanup;
}
@@ -334,7 +334,7 @@ sw_queue_setup(struct rte_eventdev *dev, uint8_t queue_id,
type = SW_SCHED_TYPE_DIRECT;
} else if (RTE_EVENT_QUEUE_CFG_ALL_TYPES
& conf->event_queue_cfg) {
- SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported\n");
+ SW_LOG_ERR("QUEUE_CFG_ALL_TYPES not supported");
return -ENOTSUP;
}
@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)
/* check a service core is mapped to this service */
if (!rte_service_runstate_get(sw->service_id)) {
- SW_LOG_ERR("Warning: No Service core enabled on service %s\n",
+ SW_LOG_ERR("Warning: No Service core enabled on service %s",
sw->service_name);
return -ENOENT;
}
@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
/* check all ports are set up */
for (i = 0; i < sw->port_count; i++)
if (sw->ports[i].rx_worker_ring == NULL) {
- SW_LOG_ERR("Port %d not configured\n", i);
+ SW_LOG_ERR("Port %d not configured", i);
return -ESTALE;
}
@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
for (i = 0; i < sw->qid_count; i++)
if (!sw->qids[i].initialized ||
sw->qids[i].cq_num_mapped_cqs == 0) {
- SW_LOG_ERR("Queue %d not configured\n", i);
+ SW_LOG_ERR("Queue %d not configured", i);
return -ENOLINK;
}
@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)
if (!kvlist) {
SW_LOG_INFO(
- "Ignoring unsupported parameters when creating device '%s'\n",
+ "Ignoring unsupported parameters when creating device '%s'",
name);
} else {
int ret = rte_kvargs_process(kvlist, NUMA_NODE_ARG,
@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
SW_LOG_INFO(
"Creating eventdev sw device %s, numa_node=%d, "
"sched_quanta=%d, credit_quanta=%d "
- "min_burst=%d, deq_burst=%d, refill_once=%d\n",
+ "min_burst=%d, deq_burst=%d, refill_once=%d",
name, socket_id, sched_quanta, credit_quanta,
min_burst_size, deq_burst_size, refill_once);
@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
if (name == NULL)
return -EINVAL;
- SW_LOG_INFO("Closing eventdev sw device %s\n", name);
+ SW_LOG_INFO("Closing eventdev sw device %s", name);
return rte_event_pmd_vdev_uninit(name);
}
diff --git a/drivers/event/sw/sw_evdev_xstats.c b/drivers/event/sw/sw_evdev_xstats.c
index fbac8f3ab5..076b982ab8 100644
--- a/drivers/event/sw/sw_evdev_xstats.c
+++ b/drivers/event/sw/sw_evdev_xstats.c
@@ -419,7 +419,7 @@ sw_xstats_get_names(const struct rte_eventdev *dev,
start_offset = sw->xstats_offset_for_qid[queue_port_id];
break;
default:
- SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()\n");
+ SW_LOG_ERR("Invalid mode received in sw_xstats_get_names()");
return -EINVAL;
};
@@ -470,7 +470,7 @@ sw_xstats_update(struct sw_evdev *sw, enum rte_event_dev_xstats_mode mode,
xstats_mode_count = sw->xstats_count_per_qid[queue_port_id];
break;
default:
- SW_LOG_ERR("Invalid mode received in sw_xstats_get()\n");
+ SW_LOG_ERR("Invalid mode received in sw_xstats_get()");
goto invalid_value;
};
diff --git a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
index 84371d5d1a..b0c6d153e4 100644
--- a/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
+++ b/drivers/mempool/dpaa2/dpaa2_hw_mempool.c
@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
goto err1;
}
@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
ret = dpaa2_affine_qbman_swp();
if (ret != 0) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
ret = dpaa2_affine_qbman_swp();
if (ret != 0) {
DPAA2_MEMPOOL_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return ret;
}
@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
msl = rte_mem_virt2memseg_list(vaddr);
if (!msl) {
- DPAA2_MEMPOOL_DEBUG("Memsegment is External.\n");
+ DPAA2_MEMPOOL_DEBUG("Memsegment is External.");
rte_fslmc_vfio_mem_dmamap((size_t)vaddr,
(size_t)paddr, (size_t)len);
}
diff --git a/drivers/mempool/octeontx/octeontx_fpavf.c b/drivers/mempool/octeontx/octeontx_fpavf.c
index 1513c632c6..966fee8bfe 100644
--- a/drivers/mempool/octeontx/octeontx_fpavf.c
+++ b/drivers/mempool/octeontx/octeontx_fpavf.c
@@ -134,7 +134,7 @@ octeontx_fpa_gpool_alloc(unsigned int object_size)
if (res->sz128 == 0) {
res->sz128 = sz128;
- fpavf_log_dbg("gpool %d blk_sz %d\n", res->vf_id,
+ fpavf_log_dbg("gpool %d blk_sz %d", res->vf_id,
sz128);
return res->vf_id;
@@ -273,7 +273,7 @@ octeontx_fpapf_pool_setup(unsigned int gpool, unsigned int buf_size,
goto err;
}
- fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64 "\n",
+ fpavf_log_dbg(" vfid %d gpool %d aid %d pool_cfg 0x%x pool_stack_base %" PRIx64 " pool_stack_end %" PRIx64" aura_cfg %" PRIx64,
fpa->vf_id, gpool, cfg.aid, (unsigned int)cfg.pool_cfg,
cfg.pool_stack_base, cfg.pool_stack_end, cfg.aura_cfg);
@@ -351,8 +351,7 @@ octeontx_fpapf_aura_attach(unsigned int gpool_index)
sizeof(struct octeontx_mbox_fpa_cfg),
&resp, sizeof(resp));
if (ret < 0) {
- fpavf_log_err("Could not attach fpa ");
- fpavf_log_err("aura %d to pool %d. Err=%d. FuncErr=%d\n",
+ fpavf_log_err("Could not attach fpa aura %d to pool %d. Err=%d. FuncErr=%d",
FPA_AURA_IDX(gpool_index), gpool_index, ret,
hdr.res_code);
ret = -EACCES;
@@ -380,7 +379,7 @@ octeontx_fpapf_aura_detach(unsigned int gpool_index)
hdr.vfid = gpool_index;
ret = octeontx_mbox_send(&hdr, &cfg, sizeof(cfg), NULL, 0);
if (ret < 0) {
- fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d\n",
+ fpavf_log_err("Couldn't detach FPA aura %d Err=%d FuncErr=%d",
FPA_AURA_IDX(gpool_index), ret,
hdr.res_code);
ret = -EINVAL;
@@ -428,8 +427,7 @@ octeontx_fpapf_start_count(uint16_t gpool_index)
hdr.vfid = gpool_index;
ret = octeontx_mbox_send(&hdr, NULL, 0, NULL, 0);
if (ret < 0) {
- fpavf_log_err("Could not start buffer counting for ");
- fpavf_log_err("FPA pool %d. Err=%d. FuncErr=%d\n",
+ fpavf_log_err("Could not start buffer counting for FPA pool %d. Err=%d. FuncErr=%d",
gpool_index, ret, hdr.res_code);
ret = -EINVAL;
goto err;
@@ -636,7 +634,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
cnt = fpavf_read64((void *)((uintptr_t)pool_bar +
FPA_VF_VHAURA_CNT(gaura)));
if (cnt) {
- fpavf_log_dbg("buffer exist in pool cnt %" PRId64 "\n", cnt);
+ fpavf_log_dbg("buffer exist in pool cnt %" PRId64, cnt);
return -EBUSY;
}
@@ -664,7 +662,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
(pool_bar + FPA_VF_VHAURA_OP_ALLOC(gaura)));
if (node == NULL) {
- fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf\n",
+ fpavf_log_err("GAURA[%u] missing %" PRIx64 " buf",
gaura, avail);
break;
}
@@ -684,7 +682,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
curr = curr[0]) {
if (curr == curr[0] ||
((uintptr_t)curr != ((uintptr_t)curr[0] - sz))) {
- fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)\n",
+ fpavf_log_err("POOL# %u buf sequence err (%p vs. %p)",
gpool, curr, curr[0]);
}
}
@@ -705,7 +703,7 @@ octeontx_fpa_bufpool_destroy(uintptr_t handle, int node_id)
ret = octeontx_fpapf_aura_detach(gpool);
if (ret) {
- fpavf_log_err("Failed to detach gaura %u. error code=%d\n",
+ fpavf_log_err("Failed to detach gaura %u. error code=%d",
gpool, ret);
}
@@ -757,7 +755,7 @@ octeontx_fpavf_identify(void *bar0)
stack_ln_ptr = fpavf_read64((void *)((uintptr_t)bar0 +
FPA_VF_VHPOOL_THRESHOLD(0)));
if (vf_idx >= FPA_VF_MAX) {
- fpavf_log_err("vf_id(%d) greater than max vf (32)\n", vf_id);
+ fpavf_log_err("vf_id(%d) greater than max vf (32)", vf_id);
return -E2BIG;
}
diff --git a/drivers/mempool/octeontx/rte_mempool_octeontx.c b/drivers/mempool/octeontx/rte_mempool_octeontx.c
index f4de1c8412..631e521b58 100644
--- a/drivers/mempool/octeontx/rte_mempool_octeontx.c
+++ b/drivers/mempool/octeontx/rte_mempool_octeontx.c
@@ -27,11 +27,11 @@ octeontx_fpavf_alloc(struct rte_mempool *mp)
goto _end;
if ((uint32_t)rc != object_size)
- fpavf_log_err("buffer size mismatch: %d instead of %u\n",
+ fpavf_log_err("buffer size mismatch: %d instead of %u",
rc, object_size);
- fpavf_log_info("Pool created %p with .. ", (void *)pool);
- fpavf_log_info("obj_sz %d, cnt %d\n", object_size, memseg_count);
+ fpavf_log_info("Pool created %p with .. obj_sz %d, cnt %d",
+ (void *)pool, object_size, memseg_count);
/* assign pool handle to mempool */
mp->pool_id = (uint64_t)pool;
diff --git a/drivers/ml/cnxk/cn10k_ml_dev.c b/drivers/ml/cnxk/cn10k_ml_dev.c
index 41f3b7a95d..3c328d9d0e 100644
--- a/drivers/ml/cnxk/cn10k_ml_dev.c
+++ b/drivers/ml/cnxk/cn10k_ml_dev.c
@@ -108,14 +108,14 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
kvlist = rte_kvargs_parse(devargs->args, valid_args);
if (kvlist == NULL) {
- plt_err("Error parsing devargs\n");
+ plt_err("Error parsing devargs");
return -EINVAL;
}
if (rte_kvargs_count(kvlist, CN10K_ML_FW_PATH) == 1) {
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_PATH, &parse_string_arg, &fw_path);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_FW_PATH);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_FW_PATH);
ret = -EINVAL;
goto exit;
}
@@ -126,7 +126,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_ENABLE_DPE_WARNINGS,
&parse_integer_arg, &cn10k_mldev->fw.enable_dpe_warnings);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_FW_ENABLE_DPE_WARNINGS);
ret = -EINVAL;
goto exit;
@@ -138,7 +138,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_FW_REPORT_DPE_WARNINGS,
&parse_integer_arg, &cn10k_mldev->fw.report_dpe_warnings);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_FW_REPORT_DPE_WARNINGS);
ret = -EINVAL;
goto exit;
@@ -150,7 +150,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_CACHE_MODEL_DATA, &parse_integer_arg,
&cn10k_mldev->cache_model_data);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_DEV_CACHE_MODEL_DATA);
ret = -EINVAL;
goto exit;
@@ -162,7 +162,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_ALLOC_MODE, &parse_string_arg,
&ocm_alloc_mode);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_ALLOC_MODE);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_ALLOC_MODE);
ret = -EINVAL;
goto exit;
}
@@ -173,7 +173,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_DEV_HW_QUEUE_LOCK, &parse_integer_arg,
&cn10k_mldev->hw_queue_lock);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n",
+ plt_err("Error processing arguments, key = %s",
CN10K_ML_DEV_HW_QUEUE_LOCK);
ret = -EINVAL;
goto exit;
@@ -185,7 +185,7 @@ cn10k_mldev_parse_devargs(struct rte_devargs *devargs, struct cn10k_ml_dev *cn10
ret = rte_kvargs_process(kvlist, CN10K_ML_OCM_PAGE_SIZE, &parse_integer_arg,
&cn10k_mldev->ocm_page_size);
if (ret < 0) {
- plt_err("Error processing arguments, key = %s\n", CN10K_ML_OCM_PAGE_SIZE);
+ plt_err("Error processing arguments, key = %s", CN10K_ML_OCM_PAGE_SIZE);
ret = -EINVAL;
goto exit;
}
@@ -204,7 +204,7 @@ check_args:
} else {
if ((cn10k_mldev->fw.enable_dpe_warnings < 0) ||
(cn10k_mldev->fw.enable_dpe_warnings > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_FW_ENABLE_DPE_WARNINGS,
cn10k_mldev->fw.enable_dpe_warnings);
ret = -EINVAL;
goto exit;
@@ -218,7 +218,7 @@ check_args:
} else {
if ((cn10k_mldev->fw.report_dpe_warnings < 0) ||
(cn10k_mldev->fw.report_dpe_warnings > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_FW_REPORT_DPE_WARNINGS,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_FW_REPORT_DPE_WARNINGS,
cn10k_mldev->fw.report_dpe_warnings);
ret = -EINVAL;
goto exit;
@@ -231,7 +231,7 @@ check_args:
cn10k_mldev->cache_model_data = CN10K_ML_DEV_CACHE_MODEL_DATA_DEFAULT;
} else {
if ((cn10k_mldev->cache_model_data < 0) || (cn10k_mldev->cache_model_data > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_CACHE_MODEL_DATA,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_CACHE_MODEL_DATA,
cn10k_mldev->cache_model_data);
ret = -EINVAL;
goto exit;
@@ -244,7 +244,7 @@ check_args:
} else {
if (!((strcmp(ocm_alloc_mode, "lowest") == 0) ||
(strcmp(ocm_alloc_mode, "largest") == 0))) {
- plt_err("Invalid argument, %s = %s\n", CN10K_ML_OCM_ALLOC_MODE,
+ plt_err("Invalid argument, %s = %s", CN10K_ML_OCM_ALLOC_MODE,
ocm_alloc_mode);
ret = -EINVAL;
goto exit;
@@ -257,7 +257,7 @@ check_args:
cn10k_mldev->hw_queue_lock = CN10K_ML_DEV_HW_QUEUE_LOCK_DEFAULT;
} else {
if ((cn10k_mldev->hw_queue_lock < 0) || (cn10k_mldev->hw_queue_lock > 1)) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_DEV_HW_QUEUE_LOCK,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_DEV_HW_QUEUE_LOCK,
cn10k_mldev->hw_queue_lock);
ret = -EINVAL;
goto exit;
@@ -269,7 +269,7 @@ check_args:
cn10k_mldev->ocm_page_size = CN10K_ML_OCM_PAGE_SIZE_DEFAULT;
} else {
if (cn10k_mldev->ocm_page_size < 0) {
- plt_err("Invalid argument, %s = %d\n", CN10K_ML_OCM_PAGE_SIZE,
+ plt_err("Invalid argument, %s = %d", CN10K_ML_OCM_PAGE_SIZE,
cn10k_mldev->ocm_page_size);
ret = -EINVAL;
goto exit;
@@ -284,7 +284,7 @@ check_args:
}
if (!found) {
- plt_err("Unsupported ocm_page_size = %d\n", cn10k_mldev->ocm_page_size);
+ plt_err("Unsupported ocm_page_size = %d", cn10k_mldev->ocm_page_size);
ret = -EINVAL;
goto exit;
}
@@ -773,7 +773,7 @@ cn10k_ml_fw_load(struct cnxk_ml_dev *cnxk_mldev)
/* Read firmware image to a buffer */
ret = rte_firmware_read(fw->path, &fw_buffer, &fw_size);
if ((ret < 0) || (fw_buffer == NULL)) {
- plt_err("Unable to read firmware data: %s\n", fw->path);
+ plt_err("Unable to read firmware data: %s", fw->path);
return ret;
}
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 971362b242..7bd73727e1 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -437,7 +437,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,
model = cnxk_mldev->mldev->data->models[model_id];
if (model == NULL) {
- plt_err("Invalid model_id = %d\n", model_id);
+ plt_err("Invalid model_id = %d", model_id);
return -EINVAL;
}
}
@@ -454,7 +454,7 @@ cnxk_ml_model_xstats_reset(struct cnxk_ml_dev *cnxk_mldev, int32_t model_id,
} else {
for (j = 0; j < nb_ids; j++) {
if (stat_ids[j] < start_id || stat_ids[j] > end_id) {
- plt_err("Invalid stat_ids[%d] = %d for model_id = %d\n", j,
+ plt_err("Invalid stat_ids[%d] = %d for model_id = %d", j,
stat_ids[j], lcl_model_id);
return -EINVAL;
}
@@ -510,12 +510,12 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co
cnxk_ml_dev_info_get(dev, &dev_info);
if (conf->nb_models > dev_info.max_models) {
- plt_err("Invalid device config, nb_models > %u\n", dev_info.max_models);
+ plt_err("Invalid device config, nb_models > %u", dev_info.max_models);
return -EINVAL;
}
if (conf->nb_queue_pairs > dev_info.max_queue_pairs) {
- plt_err("Invalid device config, nb_queue_pairs > %u\n", dev_info.max_queue_pairs);
+ plt_err("Invalid device config, nb_queue_pairs > %u", dev_info.max_queue_pairs);
return -EINVAL;
}
@@ -533,10 +533,10 @@ cnxk_ml_dev_configure(struct rte_ml_dev *dev, const struct rte_ml_dev_config *co
plt_ml_dbg("Re-configuring ML device, nb_queue_pairs = %u, nb_models = %u",
conf->nb_queue_pairs, conf->nb_models);
} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_STARTED) {
- plt_err("Device can't be reconfigured in started state\n");
+ plt_err("Device can't be reconfigured in started state");
return -ENOTSUP;
} else if (cnxk_mldev->state == ML_CNXK_DEV_STATE_CLOSED) {
- plt_err("Device can't be reconfigured after close\n");
+ plt_err("Device can't be reconfigured after close");
return -ENOTSUP;
}
@@ -853,7 +853,7 @@ cnxk_ml_dev_queue_pair_setup(struct rte_ml_dev *dev, uint16_t queue_pair_id,
uint32_t nb_desc;
if (queue_pair_id >= dev->data->nb_queue_pairs) {
- plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)\n", queue_pair_id,
+ plt_err("Queue-pair id = %u (>= max queue pairs supported, %u)", queue_pair_id,
dev->data->nb_queue_pairs);
return -EINVAL;
}
@@ -1249,11 +1249,11 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
}
if ((total_wb_pages + max_scratch_pages) > ocm->num_pages) {
- plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) > %u\n",
+ plt_err("model_id = %u: total_wb_pages (%u) + scratch_pages (%u) > %u",
lcl_model_id, total_wb_pages, max_scratch_pages, ocm->num_pages);
if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
- plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u\n", layer_id,
+ plt_ml_dbg("layer_id = %u: wb_pages = %u, scratch_pages = %u", layer_id,
model->layer[layer_id].glow.ocm_map.wb_pages,
model->layer[layer_id].glow.ocm_map.scratch_pages);
#ifdef RTE_MLDEV_CNXK_ENABLE_MVTVM
@@ -1262,7 +1262,7 @@ cnxk_ml_model_load(struct rte_ml_dev *dev, struct rte_ml_model_params *params, u
layer_id++) {
if (model->layer[layer_id].type == ML_CNXK_LAYER_TYPE_MRVL) {
plt_ml_dbg(
- "layer_id = %u: wb_pages = %u, scratch_pages = %u\n",
+ "layer_id = %u: wb_pages = %u, scratch_pages = %u",
layer_id,
model->layer[layer_id].glow.ocm_map.wb_pages,
model->layer[layer_id].glow.ocm_map.scratch_pages);
diff --git a/drivers/net/atlantic/atl_rxtx.c b/drivers/net/atlantic/atl_rxtx.c
index cb6f8141a8..0f367faad5 100644
--- a/drivers/net/atlantic/atl_rxtx.c
+++ b/drivers/net/atlantic/atl_rxtx.c
@@ -359,13 +359,13 @@ atl_rx_init(struct rte_eth_dev *eth_dev)
buff_size = RTE_ALIGN_FLOOR(buff_size, 1024);
if (buff_size > HW_ATL_B0_RXD_BUF_SIZE_MAX) {
PMD_INIT_LOG(WARNING,
- "Port %d queue %d: mem pool buff size is too big\n",
+ "Port %d queue %d: mem pool buff size is too big",
rxq->port_id, rxq->queue_id);
buff_size = HW_ATL_B0_RXD_BUF_SIZE_MAX;
}
if (buff_size < 1024) {
PMD_INIT_LOG(ERR,
- "Port %d queue %d: mem pool buff size is too small\n",
+ "Port %d queue %d: mem pool buff size is too small",
rxq->port_id, rxq->queue_id);
return -EINVAL;
}
diff --git a/drivers/net/atlantic/hw_atl/hw_atl_utils.c b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
index 84d11ab3a5..06d79115b9 100644
--- a/drivers/net/atlantic/hw_atl/hw_atl_utils.c
+++ b/drivers/net/atlantic/hw_atl/hw_atl_utils.c
@@ -76,7 +76,7 @@ int hw_atl_utils_initfw(struct aq_hw_s *self, const struct aq_fw_ops **fw_ops)
self->fw_ver_actual) == 0) {
*fw_ops = &aq_fw_2x_ops;
} else {
- PMD_DRV_LOG(ERR, "Bad FW version detected: %x\n",
+ PMD_DRV_LOG(ERR, "Bad FW version detected: %x",
self->fw_ver_actual);
return -EOPNOTSUPP;
}
@@ -124,7 +124,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "MAC kickstart failed\n");
+ PMD_DRV_LOG(ERR, "MAC kickstart failed");
return -EIO;
}
@@ -152,7 +152,7 @@ static int hw_atl_utils_soft_reset_flb(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+ PMD_DRV_LOG(ERR, "FW kickstart failed");
return -EIO;
}
/* Old FW requires fixed delay after init */
@@ -209,7 +209,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
aq_hw_write_reg(self, 0x534, 0xA0);
if (rbl_status == 0xF1A7) {
- PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented\n");
+ PMD_DRV_LOG(ERR, "No FW detected. Dynamic FW load not implemented");
return -EOPNOTSUPP;
}
@@ -221,7 +221,7 @@ static int hw_atl_utils_soft_reset_rbl(struct aq_hw_s *self)
AQ_HW_SLEEP(10);
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "FW kickstart failed\n");
+ PMD_DRV_LOG(ERR, "FW kickstart failed");
return -EIO;
}
/* Old FW requires fixed delay after init */
@@ -246,7 +246,7 @@ int hw_atl_utils_soft_reset(struct aq_hw_s *self)
}
if (k == 1000) {
- PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started\n");
+ PMD_DRV_LOG(ERR, "Neither RBL nor FLB firmware started");
return -EOPNOTSUPP;
}
diff --git a/drivers/net/axgbe/axgbe_ethdev.c b/drivers/net/axgbe/axgbe_ethdev.c
index 6ce87f83f4..da45ebf45f 100644
--- a/drivers/net/axgbe/axgbe_ethdev.c
+++ b/drivers/net/axgbe/axgbe_ethdev.c
@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
tc_num = pdata->pfc_map[pfc_conf->priority];
if (pfc_conf->priority >= pdata->hw_feat.tc_cnt) {
- PMD_INIT_LOG(ERR, "Max supported traffic class: %d\n",
+ PMD_INIT_LOG(ERR, "Max supported traffic class: %d",
pdata->hw_feat.tc_cnt);
return -EINVAL;
}
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 597ee43359..3153cc4d80 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -8124,7 +8124,7 @@ static int bnx2x_get_shmem_info(struct bnx2x_softc *sc)
val = sc->devinfo.bc_ver >> 8;
if (val < BNX2X_BC_VER) {
/* for now only warn later we might need to enforce this */
- PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC\n",
+ PMD_DRV_LOG(NOTICE, sc, "This driver needs bc_ver %X but found %X, please upgrade BC",
BNX2X_BC_VER, val);
}
sc->link_params.feature_config_flags |=
@@ -9489,16 +9489,16 @@ static int bnx2x_prev_unload(struct bnx2x_softc *sc)
hw_lock_val = (REG_RD(sc, hw_lock_reg));
if (hw_lock_val) {
if (hw_lock_val & HW_LOCK_RESOURCE_NVRAM) {
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held NVRAM lock");
REG_WR(sc, MCP_REG_MCPR_NVM_SW_ARB,
(MCPR_NVM_SW_ARB_ARB_REQ_CLR1 << SC_PORT(sc)));
}
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held HW lock");
REG_WR(sc, hw_lock_reg, 0xffffffff);
}
if (MCPR_ACCESS_LOCK_LOCK & REG_RD(sc, MCP_REG_MCPR_ACCESS_LOCK)) {
- PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR\n");
+ PMD_DRV_LOG(DEBUG, sc, "Releasing previously held ALR");
REG_WR(sc, MCP_REG_MCPR_ACCESS_LOCK, 0);
}
diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 06c21ebe6d..3cca8a07f3 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -702,7 +702,7 @@ selection_logic(struct bond_dev_private *internals, uint16_t member_id)
ret = rte_eth_link_get_nowait(members[i], &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
members[i], rte_strerror(-ret));
continue;
}
@@ -879,7 +879,7 @@ bond_mode_8023ad_periodic_cb(void *arg)
ret = rte_eth_link_get_nowait(member_id, &link_info);
if (ret < 0) {
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
member_id, rte_strerror(-ret));
}
diff --git a/drivers/net/bonding/rte_eth_bond_alb.c b/drivers/net/bonding/rte_eth_bond_alb.c
index 56945e2349..253f38da4a 100644
--- a/drivers/net/bonding/rte_eth_bond_alb.c
+++ b/drivers/net/bonding/rte_eth_bond_alb.c
@@ -60,7 +60,7 @@ bond_mode_alb_enable(struct rte_eth_dev *bond_dev)
0, data_size, socket_id);
if (internals->mode6.mempool == NULL) {
- RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.\n",
+ RTE_BOND_LOG(ERR, "%s: Failed to initialize ALB mempool.",
bond_dev->device->name);
goto mempool_alloc_error;
}
diff --git a/drivers/net/bonding/rte_eth_bond_api.c b/drivers/net/bonding/rte_eth_bond_api.c
index 99e496556a..ffc1322047 100644
--- a/drivers/net/bonding/rte_eth_bond_api.c
+++ b/drivers/net/bonding/rte_eth_bond_api.c
@@ -482,7 +482,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
ret = rte_eth_dev_info_get(member_port_id, &dev_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, member_port_id, strerror(-ret));
return ret;
@@ -609,7 +609,7 @@ __eth_bond_member_add_lock_free(uint16_t bonding_port_id, uint16_t member_port_i
&bonding_eth_dev->data->port_id);
internals->member_count--;
RTE_BOND_LOG(ERR,
- "Member (port %u) link get failed: %s\n",
+ "Member (port %u) link get failed: %s",
member_port_id, rte_strerror(-ret));
return -1;
}
diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index c40d18d128..4144c86be4 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -191,7 +191,7 @@ bond_ethdev_8023ad_flow_verify(struct rte_eth_dev *bond_dev,
ret = rte_eth_dev_info_get(member_port, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, member_port, strerror(-ret));
return ret;
@@ -221,7 +221,7 @@ bond_8023ad_slow_pkt_hw_filter_supported(uint16_t port_id) {
ret = rte_eth_dev_info_get(bond_dev->data->port_id, &bond_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__, bond_dev->data->port_id,
strerror(-ret));
@@ -2289,7 +2289,7 @@ bond_ethdev_info(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
ret = rte_eth_dev_info_get(member.port_id, &member_info);
if (ret != 0) {
RTE_BOND_LOG(ERR,
- "%s: Error during getting device (port %u) info: %s\n",
+ "%s: Error during getting device (port %u) info: %s",
__func__,
member.port_id,
strerror(-ret));
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index c841b31051..60baf806ab 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
}
if (mp == NULL || mp[0] == NULL || mp[1] == NULL) {
- plt_err("invalid memory pools\n");
+ plt_err("invalid memory pools");
return -EINVAL;
}
@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
return -EINVAL;
}
- plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u\n", (*spb_pool)->name,
+ plt_info("spb_pool:%s lpb_pool:%s lpb_len:%u spb_len:%u", (*spb_pool)->name,
(*lpb_pool)->name, (*lpb_pool)->elt_size, (*spb_pool)->elt_size);
return 0;
diff --git a/drivers/net/cnxk/cnxk_ethdev_mcs.c b/drivers/net/cnxk/cnxk_ethdev_mcs.c
index 06ef7c98f3..119060bcf3 100644
--- a/drivers/net/cnxk/cnxk_ethdev_mcs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_mcs.c
@@ -568,17 +568,17 @@ cnxk_eth_macsec_session_stats_get(struct cnxk_eth_dev *dev, struct cnxk_macsec_s
req.id = sess->flow_id;
req.dir = sess->dir;
roc_mcs_flowid_stats_get(mcs_dev->mdev, &req, &flow_stats);
- plt_nix_dbg("\n******* FLOW_ID IDX[%u] STATS dir: %u********\n", sess->flow_id, sess->dir);
- plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64 "\n", flow_stats.tcam_hit_cnt);
+ plt_nix_dbg("******* FLOW_ID IDX[%u] STATS dir: %u********", sess->flow_id, sess->dir);
+ plt_nix_dbg("TX: tcam_hit_cnt: 0x%" PRIx64, flow_stats.tcam_hit_cnt);
req.id = mcs_dev->port_id;
req.dir = sess->dir;
roc_mcs_port_stats_get(mcs_dev->mdev, &req, &port_stats);
- plt_nix_dbg("\n********** PORT[0] STATS ****************\n");
- plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64 "\n", port_stats.tcam_miss_cnt);
- plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64 "\n", port_stats.parser_err_cnt);
- plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64 "\n", port_stats.preempt_err_cnt);
- plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64 "\n", port_stats.sectag_insert_err_cnt);
+ plt_nix_dbg("********** PORT[0] STATS ****************");
+ plt_nix_dbg("RX tcam_miss_cnt: 0x%" PRIx64, port_stats.tcam_miss_cnt);
+ plt_nix_dbg("RX parser_err_cnt: 0x%" PRIx64, port_stats.parser_err_cnt);
+ plt_nix_dbg("RX preempt_err_cnt: 0x%" PRIx64, port_stats.preempt_err_cnt);
+ plt_nix_dbg("RX sectag_insert_err_cnt: 0x%" PRIx64, port_stats.sectag_insert_err_cnt);
req.id = sess->secy_id;
req.dir = sess->dir;
diff --git a/drivers/net/cnxk/cnxk_ethdev_ops.c b/drivers/net/cnxk/cnxk_ethdev_ops.c
index c8f4848f92..89e00f8fc7 100644
--- a/drivers/net/cnxk/cnxk_ethdev_ops.c
+++ b/drivers/net/cnxk/cnxk_ethdev_ops.c
@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
/* Wait for sq entries to be flushed */
rc = roc_nix_tm_sq_flush_spin(sq);
if (rc) {
- plt_err("Failed to drain sq, rc=%d\n", rc);
+ plt_err("Failed to drain sq, rc=%d", rc);
goto exit;
}
if (data->tx_queue_state[i] == RTE_ETH_QUEUE_STATE_STARTED) {
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 40569ddc6f..011229a470 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -2020,7 +2020,7 @@ cpfl_metadata_write_port_id(struct cpfl_itf *itf)
dev_id = cpfl_get_port_id(itf);
if (dev_id == CPFL_INVALID_HW_ID) {
- PMD_DRV_LOG(ERR, "fail to get hw ID\n");
+ PMD_DRV_LOG(ERR, "fail to get hw ID");
return false;
}
cpfl_metadata_write16(&itf->adapter->meta, type, offset, dev_id << 3);
diff --git a/drivers/net/cpfl/cpfl_fxp_rule.c b/drivers/net/cpfl/cpfl_fxp_rule.c
index be34da9fa2..42553c9641 100644
--- a/drivers/net/cpfl/cpfl_fxp_rule.c
+++ b/drivers/net/cpfl/cpfl_fxp_rule.c
@@ -77,7 +77,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
if (ret && ret != CPFL_ERR_CTLQ_NO_WORK && ret != CPFL_ERR_CTLQ_ERROR &&
ret != CPFL_ERR_CTLQ_EMPTY) {
- PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x\n", ret);
+ PMD_INIT_LOG(ERR, "failed to recv ctrlq msg. err: 0x%4x", ret);
retries++;
continue;
}
@@ -108,7 +108,7 @@ cpfl_receive_ctlq_msg(struct idpf_hw *hw, struct idpf_ctlq_info *cq, u16 num_q_m
buff_cnt = dma ? 1 : 0;
ret = cpfl_vport_ctlq_post_rx_buffs(hw, cq, &buff_cnt, &dma);
if (ret)
- PMD_INIT_LOG(WARNING, "could not posted recv bufs\n");
+ PMD_INIT_LOG(WARNING, "could not posted recv bufs");
}
break;
}
@@ -131,7 +131,7 @@ cpfl_mod_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
/* prepare rule blob */
if (!dma->va) {
- PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+ PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
return -1;
}
blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
@@ -176,7 +176,7 @@ cpfl_default_rule_pack(struct cpfl_rule_info *rinfo, struct idpf_dma_mem *dma,
uint16_t cfg_ctrl;
if (!dma->va) {
- PMD_INIT_LOG(ERR, "dma mem passed to %s is null\n", __func__);
+ PMD_INIT_LOG(ERR, "dma mem passed to %s is null", __func__);
return -1;
}
blob = (union cpfl_rule_cfg_pkt_record *)dma->va;
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index 8e610b6bba..c5b1f161fd 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -728,7 +728,7 @@ dpaa2_dev_rx_queue_setup(struct rte_eth_dev *dev,
total_nb_rx_desc += nb_rx_desc;
if (total_nb_rx_desc > MAX_NB_RX_DESC) {
- DPAA2_PMD_WARN("\nTotal nb_rx_desc exceeds %d limit. Please use Normal buffers",
+ DPAA2_PMD_WARN("Total nb_rx_desc exceeds %d limit. Please use Normal buffers",
MAX_NB_RX_DESC);
DPAA2_PMD_WARN("To use Normal buffers, run 'export DPNI_NORMAL_BUF=1' before running dynamic_dpl.sh script");
}
@@ -1063,7 +1063,7 @@ dpaa2_dev_rx_queue_count(void *rx_queue)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return -EINVAL;
}
@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
if (ret == -1)
DPAA2_PMD_DEBUG("No change in status");
else
- DPAA2_PMD_INFO("Port %d Link is %s\n", dev->data->port_id,
+ DPAA2_PMD_INFO("Port %d Link is %s", dev->data->port_id,
link.link_status ? "Up" : "Down");
return ret;
@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
dpaa2_ethq->tc_index, flow_id,
OPR_OPT_CREATE, &ocfg, 0);
if (ret) {
- DPAA2_PMD_ERR("Error setting opr: ret: %d\n", ret);
+ DPAA2_PMD_ERR("Error setting opr: ret: %d", ret);
return ret;
}
@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
WRIOP_SS_INITIALIZER(priv);
ret = dpaa2_eth_load_wriop_soft_parser(priv, DPNI_SS_INGRESS);
if (ret < 0) {
- DPAA2_PMD_ERR(" Error(%d) in loading softparser\n",
+ DPAA2_PMD_ERR(" Error(%d) in loading softparser",
ret);
return ret;
}
@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
ret = dpaa2_eth_enable_wriop_soft_parser(priv,
DPNI_SS_INGRESS);
if (ret < 0) {
- DPAA2_PMD_ERR(" Error(%d) in enabling softparser\n",
+ DPAA2_PMD_ERR(" Error(%d) in enabling softparser",
ret);
return ret;
}
@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
DPAA2_MAX_SGS * sizeof(struct qbman_sge),
rte_socket_id());
if (dpaa2_tx_sg_pool == NULL) {
- DPAA2_PMD_ERR("SG pool creation failed\n");
+ DPAA2_PMD_ERR("SG pool creation failed");
return -ENOMEM;
}
}
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index eec7e60650..e590f6f748 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -3360,7 +3360,7 @@ dpaa2_flow_verify_action(
rxq = priv->rx_vq[rss_conf->queue[i]];
if (rxq->tc_index != attr->group) {
DPAA2_PMD_ERR(
- "Queue/Group combination are not supported\n");
+ "Queue/Group combination are not supported");
return -ENOTSUP;
}
}
@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS QoS table can not be configured(%d)\n",
+ "RSS QoS table can not be configured(%d)",
ret);
return -1;
}
@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
&priv->extract.tc_key_extract[flow->tc_id].dpkg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "unable to set flow distribution.please check queue config\n");
+ "unable to set flow distribution.please check queue config");
return ret;
}
/* Allocate DMA'ble memory to write the rules */
param = (size_t)rte_malloc(NULL, 256, 64);
if (!param) {
- DPAA2_PMD_ERR("Memory allocation failure\n");
+ DPAA2_PMD_ERR("Memory allocation failure");
return -1;
}
@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &tc_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS TC table cannot be configured: %d\n",
+ "RSS TC table cannot be configured: %d",
ret);
rte_free((void *)param);
return -1;
@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
priv->token, &qos_cfg);
if (ret < 0) {
DPAA2_PMD_ERR(
- "RSS QoS dist can't be configured-%d\n",
+ "RSS QoS dist can't be configured-%d",
ret);
return -1;
}
@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
int ret = 0;
if (unlikely(attr->group >= dpni_attr->num_rx_tcs)) {
- DPAA2_PMD_ERR("Priority group is out of range\n");
+ DPAA2_PMD_ERR("Priority group is out of range");
ret = -ENOTSUP;
}
if (unlikely(attr->priority >= dpni_attr->fs_entries)) {
- DPAA2_PMD_ERR("Priority within the group is out of range\n");
+ DPAA2_PMD_ERR("Priority within the group is out of range");
ret = -ENOTSUP;
}
if (unlikely(attr->egress)) {
DPAA2_PMD_ERR(
- "Flow configuration is not supported on egress side\n");
+ "Flow configuration is not supported on egress side");
ret = -ENOTSUP;
}
if (unlikely(!attr->ingress)) {
- DPAA2_PMD_ERR("Ingress flag must be configured\n");
+ DPAA2_PMD_ERR("Ingress flag must be configured");
ret = -EINVAL;
}
return ret;
@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpni_get_attributes(dpni, CMD_PRI_LOW, token, &dpni_attr);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Failure to get dpni@%p attribute, err code %d\n",
+ "Failure to get dpni@%p attribute, err code %d",
dpni, ret);
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ATTR,
@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_attr(&dpni_attr, flow_attr);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid attributes are given\n");
+ "Invalid attributes are given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ATTR,
flow_attr, "invalid");
@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_patterns(pattern);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid pattern list is given\n");
+ "Invalid pattern list is given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ITEM,
pattern, "invalid");
@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
ret = dpaa2_dev_verify_actions(actions);
if (ret < 0) {
DPAA2_PMD_ERR(
- "Invalid action list is given\n");
+ "Invalid action list is given");
rte_flow_error_set(error, EPERM,
RTE_FLOW_ERROR_TYPE_ACTION,
actions, "invalid");
@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
key_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!key_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
mask_iova = (size_t)rte_zmalloc(NULL, 256, 64);
if (!mask_iova) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto mem_failure;
}
diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
index 2ff1a98fda..7dd5a60966 100644
--- a/drivers/net/dpaa2/dpaa2_mux.c
+++ b/drivers/net/dpaa2/dpaa2_mux.c
@@ -88,7 +88,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
(2 * DIST_PARAM_IOVA_SIZE), RTE_CACHE_LINE_SIZE);
if (!flow) {
DPAA2_PMD_ERR(
- "Memory allocation failure for rule configuration\n");
+ "Memory allocation failure for rule configuration");
goto creation_error;
}
key_iova = (void *)((size_t)flow + sizeof(struct rte_flow));
@@ -211,7 +211,7 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
vf_conf = (const struct rte_flow_action_vf *)(actions[0]->conf);
if (vf_conf->id == 0 || vf_conf->id > dpdmux_dev->num_ifs) {
- DPAA2_PMD_ERR("Invalid destination id\n");
+ DPAA2_PMD_ERR("Invalid destination id");
goto creation_error;
}
dpdmux_action.dest_if = vf_conf->id;
diff --git a/drivers/net/dpaa2/dpaa2_recycle.c b/drivers/net/dpaa2/dpaa2_recycle.c
index fbfdf360d1..4fde9b95a0 100644
--- a/drivers/net/dpaa2/dpaa2_recycle.c
+++ b/drivers/net/dpaa2/dpaa2_recycle.c
@@ -423,7 +423,7 @@ ls_mac_serdes_lpbk_support(uint16_t mac_id,
sd_idx = ls_serdes_cfg_to_idx(sd_cfg, sd_id);
if (sd_idx < 0) {
- DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist\n",
+ DPAA2_PMD_ERR("Serdes protocol(0x%02x) does not exist",
sd_cfg);
return false;
}
@@ -552,7 +552,7 @@ ls_serdes_eth_lpbk(uint16_t mac_id, int en)
(serdes_id - LSX_SERDES_1) * 0x10000,
sizeof(struct ccsr_ls_serdes) / 64 * 64 + 64);
if (!serdes_base) {
- DPAA2_PMD_ERR("Serdes register map failed\n");
+ DPAA2_PMD_ERR("Serdes register map failed");
return -ENOMEM;
}
@@ -587,7 +587,7 @@ lx_serdes_eth_lpbk(uint16_t mac_id, int en)
(serdes_id - LSX_SERDES_1) * 0x10000,
sizeof(struct ccsr_lx_serdes) / 64 * 64 + 64);
if (!serdes_base) {
- DPAA2_PMD_ERR("Serdes register map failed\n");
+ DPAA2_PMD_ERR("Serdes register map failed");
return -ENOMEM;
}
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index 23f7c4132d..b64232b88f 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -640,7 +640,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
ret = dpaa2_affine_qbman_swp();
if (ret) {
- DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d\n",
+ DPAA2_PMD_ERR("Failed to allocate IO portal, tid: %d",
rte_gettid());
return;
}
@@ -691,7 +691,7 @@ dump_err_pkts(struct dpaa2_queue *dpaa2_q)
hw_annot_addr = (void *)((size_t)v_addr + DPAA2_FD_PTA_SIZE);
fas = hw_annot_addr;
- DPAA2_PMD_ERR("\n\n[%d] error packet on port[%d]:"
+ DPAA2_PMD_ERR("[%d] error packet on port[%d]:"
" fd_off: %d, fd_err: %x, fas_status: %x",
rte_lcore_id(), eth_data->port_id,
DPAA2_GET_FD_OFFSET(fd), DPAA2_GET_FD_ERR(fd),
@@ -976,7 +976,7 @@ dpaa2_dev_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1107,7 +1107,7 @@ uint16_t dpaa2_dev_tx_conf(void *queue)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1256,7 +1256,7 @@ dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1573,7 +1573,7 @@ dpaa2_dev_tx_multi_txq_ordered(void **queue,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -1747,7 +1747,7 @@ dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_PMD_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
diff --git a/drivers/net/dpaa2/dpaa2_sparser.c b/drivers/net/dpaa2/dpaa2_sparser.c
index 63463c4fbf..eb649fb063 100644
--- a/drivers/net/dpaa2/dpaa2_sparser.c
+++ b/drivers/net/dpaa2/dpaa2_sparser.c
@@ -165,7 +165,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
addr = rte_malloc(NULL, sp_param.size, 64);
if (!addr) {
- DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+ DPAA2_PMD_ERR("Memory unavailable for soft parser param");
return -1;
}
@@ -174,7 +174,7 @@ int dpaa2_eth_load_wriop_soft_parser(struct dpaa2_dev_priv *priv,
ret = dpni_load_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
if (ret) {
- DPAA2_PMD_ERR("dpni_load_sw_sequence failed\n");
+ DPAA2_PMD_ERR("dpni_load_sw_sequence failed");
rte_free(addr);
return ret;
}
@@ -214,7 +214,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
if (cfg.param_size) {
param_addr = rte_malloc(NULL, cfg.param_size, 64);
if (!param_addr) {
- DPAA2_PMD_ERR("Memory unavailable for soft parser param\n");
+ DPAA2_PMD_ERR("Memory unavailable for soft parser param");
return -1;
}
@@ -227,7 +227,7 @@ int dpaa2_eth_enable_wriop_soft_parser(struct dpaa2_dev_priv *priv,
ret = dpni_enable_sw_sequence(dpni, CMD_PRI_LOW, priv->token, &cfg);
if (ret) {
- DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d\n",
+ DPAA2_PMD_ERR("dpni_enable_sw_sequence failed for dpni%d",
priv->hw_id);
rte_free(param_addr);
return ret;
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 8fe5bfa013..3c0f282ec3 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -584,7 +584,7 @@ dpaa2_tm_configure_queue(struct rte_eth_dev *dev, struct dpaa2_tm_node *node)
return -1;
}
- DPAA2_PMD_DEBUG("tc_id = %d, channel = %d\n\n", tc_id,
+ DPAA2_PMD_DEBUG("tc_id = %d, channel = %d", tc_id,
node->parent->channel_id);
ret = dpni_set_queue(dpni, CMD_PRI_LOW, priv->token, DPNI_QUEUE_TX,
((node->parent->channel_id << 8) | tc_id),
@@ -653,7 +653,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
int i;
if (n == 1) {
- DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+ DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
nodes[n - 1]->id, nodes[n - 1]->priority,
n - 1);
dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -669,7 +669,7 @@ dpaa2_tm_sort_and_configure(struct rte_eth_dev *dev,
}
dpaa2_tm_sort_and_configure(dev, nodes, n - 1);
- DPAA2_PMD_DEBUG("node id = %d\n, priority = %d, index = %d\n",
+ DPAA2_PMD_DEBUG("node id = %d, priority = %d, index = %d",
nodes[n - 1]->id, nodes[n - 1]->priority,
n - 1);
dpaa2_tm_configure_queue(dev, nodes[n - 1]);
@@ -709,7 +709,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
}
}
if (i > 0) {
- DPAA2_PMD_DEBUG("Configure queues\n");
+ DPAA2_PMD_DEBUG("Configure queues");
dpaa2_tm_sort_and_configure(dev, nodes, i);
}
}
@@ -733,13 +733,13 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
node->profile->params.peak.rate / (1024 * 1024);
/* root node */
if (node->parent == NULL) {
- DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u\n",
+ DPAA2_PMD_DEBUG("LNI S.rate = %u, burst =%u",
tx_cr_shaper.rate_limit,
tx_cr_shaper.max_burst_size);
param = 0x2;
param |= node->profile->params.pkt_length_adjust << 16;
} else {
- DPAA2_PMD_DEBUG("Channel = %d S.rate = %u\n",
+ DPAA2_PMD_DEBUG("Channel = %d S.rate = %u",
node->channel_id,
tx_cr_shaper.rate_limit);
param = (node->channel_id << 8);
@@ -871,15 +871,15 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
"Scheduling Failed\n");
goto out;
}
- DPAA2_PMD_DEBUG("########################################\n");
- DPAA2_PMD_DEBUG("Channel idx = %d\n", prio_cfg.channel_idx);
+ DPAA2_PMD_DEBUG("########################################");
+ DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
for (t = 0; t < DPNI_MAX_TC; t++) {
DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
- DPAA2_PMD_DEBUG("delta = %d\n", prio_cfg.tc_sched[t].delta_bandwidth);
+ DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
}
- DPAA2_PMD_DEBUG("prioritya = %d\n", prio_cfg.prio_group_A);
- DPAA2_PMD_DEBUG("priorityb = %d\n", prio_cfg.prio_group_B);
- DPAA2_PMD_DEBUG("separate grps = %d\n\n", prio_cfg.separate_groups);
+ DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
+ DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
+ DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
}
return 0;
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index 8858f975f8..d64a1aedd3 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
PMD_DRV_LOG(ERR,
"Address change required to access page 0xA2, "
"but not supported. Please report the module "
- "type to the driver maintainers.\n");
+ "type to the driver maintainers.");
page_swap = true;
}
diff --git a/drivers/net/enetc/enetc_ethdev.c b/drivers/net/enetc/enetc_ethdev.c
index c9352f0746..d8c30ef150 100644
--- a/drivers/net/enetc/enetc_ethdev.c
+++ b/drivers/net/enetc/enetc_ethdev.c
@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
char buf[RTE_ETHER_ADDR_FMT_SIZE];
rte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);
- ENETC_PMD_NOTICE("%s%s\n", name, buf);
+ ENETC_PMD_NOTICE("%s%s", name, buf);
}
static int
@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
char *first_byte;
ENETC_PMD_NOTICE("MAC is not available for this SI, "
- "set random MAC\n");
+ "set random MAC");
mac = (uint32_t *)hw->mac.addr;
*mac = (uint32_t)rte_rand();
first_byte = (char *)mac;
diff --git a/drivers/net/enetfec/enet_ethdev.c b/drivers/net/enetfec/enet_ethdev.c
index 898aad1c37..8c7067fbb5 100644
--- a/drivers/net/enetfec/enet_ethdev.c
+++ b/drivers/net/enetfec/enet_ethdev.c
@@ -253,7 +253,7 @@ enetfec_eth_link_update(struct rte_eth_dev *dev,
link.link_status = lstatus;
link.link_speed = RTE_ETH_SPEED_NUM_1G;
- ENETFEC_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ ENETFEC_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
"Up");
return rte_eth_linkstatus_set(dev, &link);
@@ -462,7 +462,7 @@ enetfec_rx_queue_setup(struct rte_eth_dev *dev,
}
if (queue_idx >= ENETFEC_MAX_Q) {
- ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d\n",
+ ENETFEC_PMD_ERR("Invalid queue id %" PRIu16 ", max %d",
queue_idx, ENETFEC_MAX_Q);
return -EINVAL;
}
diff --git a/drivers/net/enetfec/enet_uio.c b/drivers/net/enetfec/enet_uio.c
index 6539cbb354..9f4e896985 100644
--- a/drivers/net/enetfec/enet_uio.c
+++ b/drivers/net/enetfec/enet_uio.c
@@ -177,7 +177,7 @@ config_enetfec_uio(struct enetfec_private *fep)
/* Mapping is done only one time */
if (enetfec_count > 0) {
- ENETFEC_PMD_INFO("Mapped!\n");
+ ENETFEC_PMD_INFO("Mapped!");
return 0;
}
@@ -191,7 +191,7 @@ config_enetfec_uio(struct enetfec_private *fep)
/* Open device file */
uio_job->uio_fd = open(uio_device_file_name, O_RDWR);
if (uio_job->uio_fd < 0) {
- ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file\n");
+ ENETFEC_PMD_WARN("Unable to open ENETFEC_UIO file");
return -1;
}
@@ -230,7 +230,7 @@ enetfec_configure(void)
d = opendir(FEC_UIO_DEVICE_SYS_ATTR_PATH);
if (d == NULL) {
- ENETFEC_PMD_ERR("\nError opening directory '%s': %s\n",
+ ENETFEC_PMD_ERR("Error opening directory '%s': %s",
FEC_UIO_DEVICE_SYS_ATTR_PATH, strerror(errno));
return -1;
}
@@ -249,7 +249,7 @@ enetfec_configure(void)
ret = sscanf(dir->d_name + strlen("uio"), "%d",
&uio_minor_number);
if (ret < 0)
- ENETFEC_PMD_ERR("Error: not find minor number\n");
+ ENETFEC_PMD_ERR("Error: not find minor number");
/*
* Open file uioX/name and read first line which
* contains the name for the device. Based on the
@@ -259,7 +259,7 @@ enetfec_configure(void)
ret = file_read_first_line(FEC_UIO_DEVICE_SYS_ATTR_PATH,
dir->d_name, "name", uio_name);
if (ret != 0) {
- ENETFEC_PMD_INFO("file_read_first_line failed\n");
+ ENETFEC_PMD_INFO("file_read_first_line failed");
closedir(d);
return -1;
}
diff --git a/drivers/net/enic/enic_ethdev.c b/drivers/net/enic/enic_ethdev.c
index b04b6c9aa1..1121874346 100644
--- a/drivers/net/enic/enic_ethdev.c
+++ b/drivers/net/enic/enic_ethdev.c
@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
char mac_str[RTE_ETHER_ADDR_FMT_SIZE];
rte_ether_format_addr(mac_str, RTE_ETHER_ADDR_FMT_SIZE, addr);
- ENICPMD_LOG(DEBUG, " %s address %s\n",
+ ENICPMD_LOG(DEBUG, " %s address %s",
add ? "add" : "remove", mac_str);
}
@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
rte_is_broadcast_ether_addr(addr)) {
rte_ether_format_addr(mac_str,
RTE_ETHER_ADDR_FMT_SIZE, addr);
- ENICPMD_LOG(ERR, " invalid multicast address %s\n",
+ ENICPMD_LOG(ERR, " invalid multicast address %s",
mac_str);
return -EINVAL;
}
@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
/* Flush all if requested */
if (nb_mc_addr == 0 || mc_addr_set == NULL) {
- ENICPMD_LOG(DEBUG, " flush multicast addresses\n");
+ ENICPMD_LOG(DEBUG, " flush multicast addresses");
for (i = 0; i < enic->mc_count; i++) {
addr = &enic->mc_addrs[i];
debug_log_add_del_addr(addr, false);
@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
}
if (nb_mc_addr > ENIC_MULTICAST_PERFECT_FILTERS) {
- ENICPMD_LOG(ERR, " too many multicast addresses: max=%d\n",
+ ENICPMD_LOG(ERR, " too many multicast addresses: max=%d",
ENIC_MULTICAST_PERFECT_FILTERS);
return -ENOSPC;
}
@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
tnl->prot_type != RTE_ETH_TUNNEL_TYPE_GENEVE)
return -ENOTSUP;
if (!enic->overlay_offload) {
- ENICPMD_LOG(DEBUG, " overlay offload is not supported\n");
+ ENICPMD_LOG(DEBUG, " overlay offload is not supported");
return -ENOTSUP;
}
return 0;
@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
cfg = vxlan ? OVERLAY_CFG_VXLAN_PORT_UPDATE :
OVERLAY_CFG_GENEVE_PORT_UPDATE;
if (vnic_dev_overlay_offload_cfg(enic->vdev, cfg, port)) {
- ENICPMD_LOG(DEBUG, " failed to update tunnel port\n");
+ ENICPMD_LOG(DEBUG, " failed to update tunnel port");
return -EINVAL;
}
- ENICPMD_LOG(DEBUG, " updated %s port to %u\n",
+ ENICPMD_LOG(DEBUG, " updated %s port to %u",
vxlan ? "vxlan" : "geneve", port);
if (vxlan)
enic->vxlan_port = port;
@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
* "Adding" a new port number replaces it.
*/
if (tnl->udp_port == port || tnl->udp_port == 0) {
- ENICPMD_LOG(DEBUG, " %u is already configured or invalid\n",
+ ENICPMD_LOG(DEBUG, " %u is already configured or invalid",
tnl->udp_port);
return -EINVAL;
}
@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
* which is tied to inner RSS and TSO.
*/
if (tnl->udp_port != port) {
- ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port\n",
+ ENICPMD_LOG(DEBUG, " %u is not a configured tunnel port",
tnl->udp_port);
return -EINVAL;
}
@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
}
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- ENICPMD_LOG(ERR, "unsupported representor type: %s\n",
+ ENICPMD_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
diff --git a/drivers/net/enic/enic_flow.c b/drivers/net/enic/enic_flow.c
index e6c9ad442a..758000ea21 100644
--- a/drivers/net/enic/enic_flow.c
+++ b/drivers/net/enic/enic_flow.c
@@ -1351,14 +1351,14 @@ static void
enic_dump_actions(const struct filter_action_v2 *ea)
{
if (ea->type == FILTER_ACTION_RQ_STEERING) {
- ENICPMD_LOG(INFO, "Action(V1), queue: %u\n", ea->rq_idx);
+ ENICPMD_LOG(INFO, "Action(V1), queue: %u", ea->rq_idx);
} else if (ea->type == FILTER_ACTION_V2) {
- ENICPMD_LOG(INFO, "Actions(V2)\n");
+ ENICPMD_LOG(INFO, "Actions(V2)");
if (ea->flags & FILTER_ACTION_RQ_STEERING_FLAG)
- ENICPMD_LOG(INFO, "\tqueue: %u\n",
+ ENICPMD_LOG(INFO, "\tqueue: %u",
enic_sop_rq_idx_to_rte_idx(ea->rq_idx));
if (ea->flags & FILTER_ACTION_FILTER_ID_FLAG)
- ENICPMD_LOG(INFO, "\tfilter_id: %u\n", ea->filter_id);
+ ENICPMD_LOG(INFO, "\tfilter_id: %u", ea->filter_id);
}
}
@@ -1374,13 +1374,13 @@ enic_dump_filter(const struct filter_v2 *filt)
switch (filt->type) {
case FILTER_IPV4_5TUPLE:
- ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE\n");
+ ENICPMD_LOG(INFO, "FILTER_IPV4_5TUPLE");
break;
case FILTER_USNIC_IP:
case FILTER_DPDK_1:
/* FIXME: this should be a loop */
gp = &filt->u.generic_1;
- ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x\n",
+ ENICPMD_LOG(INFO, "Filter: vlan: 0x%04x, mask: 0x%04x",
gp->val_vlan, gp->mask_vlan);
if (gp->mask_flags & FILTER_GENERIC_1_IPV4)
@@ -1438,7 +1438,7 @@ enic_dump_filter(const struct filter_v2 *filt)
? "ipfrag(y)" : "ipfrag(n)");
else
sprintf(ipfrag, "%s ", "ipfrag(x)");
- ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s\n", ip4, ip6, udp,
+ ENICPMD_LOG(INFO, "\tFlags: %s%s%s%s%s%s%s%s", ip4, ip6, udp,
tcp, tcpudp, ip4csum, l4csum, ipfrag);
for (i = 0; i < FILTER_GENERIC_1_NUM_LAYERS; i++) {
@@ -1455,7 +1455,7 @@ enic_dump_filter(const struct filter_v2 *filt)
bp += 2;
}
*bp = '\0';
- ENICPMD_LOG(INFO, "\tL%u mask: %s\n", i + 2, buf);
+ ENICPMD_LOG(INFO, "\tL%u mask: %s", i + 2, buf);
bp = buf;
for (j = 0; j <= mbyte; j++) {
sprintf(bp, "%02x",
@@ -1463,11 +1463,11 @@ enic_dump_filter(const struct filter_v2 *filt)
bp += 2;
}
*bp = '\0';
- ENICPMD_LOG(INFO, "\tL%u val: %s\n", i + 2, buf);
+ ENICPMD_LOG(INFO, "\tL%u val: %s", i + 2, buf);
}
break;
default:
- ENICPMD_LOG(INFO, "FILTER UNKNOWN\n");
+ ENICPMD_LOG(INFO, "FILTER UNKNOWN");
break;
}
}
diff --git a/drivers/net/enic/enic_vf_representor.c b/drivers/net/enic/enic_vf_representor.c
index 5d8d29135c..8469e06de9 100644
--- a/drivers/net/enic/enic_vf_representor.c
+++ b/drivers/net/enic/enic_vf_representor.c
@@ -64,7 +64,7 @@ static int enic_vf_dev_tx_queue_setup(struct rte_eth_dev *eth_dev,
/* Pass vf not pf because of cq index calculation. See enic_alloc_wq */
err = enic_alloc_wq(&vf->enic, queue_idx, socket_id, nb_desc);
if (err) {
- ENICPMD_LOG(ERR, "error in allocating wq\n");
+ ENICPMD_LOG(ERR, "error in allocating wq");
return err;
}
return 0;
@@ -104,7 +104,7 @@ static int enic_vf_dev_rx_queue_setup(struct rte_eth_dev *eth_dev,
ret = enic_alloc_rq(&vf->enic, queue_idx, socket_id, mp, nb_desc,
rx_conf->rx_free_thresh);
if (ret) {
- ENICPMD_LOG(ERR, "error in allocating rq\n");
+ ENICPMD_LOG(ERR, "error in allocating rq");
return ret;
}
return 0;
@@ -230,14 +230,14 @@ static int enic_vf_dev_start(struct rte_eth_dev *eth_dev)
/* enic_enable */
ret = enic_alloc_rx_queue_mbufs(pf, &pf->rq[index]);
if (ret) {
- ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs\n");
+ ENICPMD_LOG(ERR, "Failed to alloc sop RX queue mbufs");
return ret;
}
ret = enic_alloc_rx_queue_mbufs(pf, data_rq);
if (ret) {
/* Release the allocated mbufs for the sop rq*/
enic_rxmbuf_queue_release(pf, &pf->rq[index]);
- ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs\n");
+ ENICPMD_LOG(ERR, "Failed to alloc data RX queue mbufs");
return ret;
}
enic_start_rq(pf, vf->pf_rq_sop_idx);
@@ -430,7 +430,7 @@ static int enic_vf_stats_get(struct rte_eth_dev *eth_dev,
/* Get VF stats via PF */
err = vnic_dev_stats_dump(vf->enic.vdev, &vs);
if (err) {
- ENICPMD_LOG(ERR, "error in getting stats\n");
+ ENICPMD_LOG(ERR, "error in getting stats");
return err;
}
stats->ipackets = vs->rx.rx_frames_ok;
@@ -453,7 +453,7 @@ static int enic_vf_stats_reset(struct rte_eth_dev *eth_dev)
/* Ask PF to clear VF stats */
err = vnic_dev_stats_clear(vf->enic.vdev);
if (err)
- ENICPMD_LOG(ERR, "error in clearing stats\n");
+ ENICPMD_LOG(ERR, "error in clearing stats");
return err;
}
@@ -581,7 +581,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
/* VF MAC */
err = vnic_dev_get_mac_addr(vf->enic.vdev, vf->mac_addr.addr_bytes);
if (err) {
- ENICPMD_LOG(ERR, "error in getting MAC address\n");
+ ENICPMD_LOG(ERR, "error in getting MAC address");
return err;
}
rte_ether_addr_copy(&vf->mac_addr, vf->eth_dev->data->mac_addrs);
@@ -591,7 +591,7 @@ static int get_vf_config(struct enic_vf_representor *vf)
offsetof(struct vnic_enet_config, mtu),
sizeof(c->mtu), &c->mtu);
if (err) {
- ENICPMD_LOG(ERR, "error in getting MTU\n");
+ ENICPMD_LOG(ERR, "error in getting MTU");
return err;
}
/*
diff --git a/drivers/net/failsafe/failsafe_args.c b/drivers/net/failsafe/failsafe_args.c
index 3b867437d7..1b8f1d3050 100644
--- a/drivers/net/failsafe/failsafe_args.c
+++ b/drivers/net/failsafe/failsafe_args.c
@@ -406,7 +406,7 @@ failsafe_args_parse(struct rte_eth_dev *dev, const char *params)
kvlist = rte_kvargs_parse(mut_params,
pmd_failsafe_init_parameters);
if (kvlist == NULL) {
- ERROR("Error parsing parameters, usage:\n"
+ ERROR("Error parsing parameters, usage:"
PMD_FAILSAFE_PARAM_STRING);
return -1;
}
diff --git a/drivers/net/failsafe/failsafe_eal.c b/drivers/net/failsafe/failsafe_eal.c
index d71b512f81..e79d3b4120 100644
--- a/drivers/net/failsafe/failsafe_eal.c
+++ b/drivers/net/failsafe/failsafe_eal.c
@@ -16,7 +16,7 @@ fs_ethdev_portid_get(const char *name, uint16_t *port_id)
size_t len;
if (name == NULL) {
- DEBUG("Null pointer is specified\n");
+ DEBUG("Null pointer is specified");
return -EINVAL;
}
len = strlen(name);
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 031f3eb13f..dc4aba6e30 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -38,7 +38,7 @@ fs_flow_complain(struct rte_flow_error *error)
errstr = "unknown type";
else
errstr = errstrlist[error->type];
- ERROR("Caught error type %d (%s): %s%s\n",
+ ERROR("Caught error type %d (%s): %s%s",
error->type, errstr,
error->cause ? (snprintf(buf, sizeof(buf), "cause: %p, ",
error->cause), buf) : "",
@@ -640,7 +640,7 @@ failsafe_eth_new_event_callback(uint16_t port_id,
if (sdev->state >= DEV_PROBED)
continue;
if (dev->device == NULL) {
- WARN("Trying to probe malformed device %s.\n",
+ WARN("Trying to probe malformed device %s.",
sdev->devargs.name);
continue;
}
diff --git a/drivers/net/failsafe/failsafe_intr.c b/drivers/net/failsafe/failsafe_intr.c
index 969ded6ced..68b7310b85 100644
--- a/drivers/net/failsafe/failsafe_intr.c
+++ b/drivers/net/failsafe/failsafe_intr.c
@@ -173,17 +173,17 @@ fs_rx_event_proxy_service_install(struct fs_priv *priv)
/* run the service */
ret = rte_service_component_runstate_set(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed Setting component runstate\n");
+ ERROR("Failed Setting component runstate");
return ret;
}
ret = rte_service_set_stats_enable(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed enabling stats\n");
+ ERROR("Failed enabling stats");
return ret;
}
ret = rte_service_runstate_set(priv->rxp.sid, 1);
if (ret < 0) {
- ERROR("Failed to run service\n");
+ ERROR("Failed to run service");
return ret;
}
priv->rxp.sstate = SS_READY;
diff --git a/drivers/net/gve/base/gve_adminq.c b/drivers/net/gve/base/gve_adminq.c
index 343bd13d67..438c0c5441 100644
--- a/drivers/net/gve/base/gve_adminq.c
+++ b/drivers/net/gve/base/gve_adminq.c
@@ -11,7 +11,7 @@
#define GVE_ADMINQ_SLEEP_LEN 20
#define GVE_MAX_ADMINQ_EVENT_COUNTER_CHECK 100
-#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error:\n Expected: length=%d, feature_mask=%x.\n Actual: length=%d, feature_mask=%x."
+#define GVE_DEVICE_OPTION_ERROR_FMT "%s option error: Expected: length=%d, feature_mask=%x. Actual: length=%d, feature_mask=%x."
#define GVE_DEVICE_OPTION_TOO_BIG_FMT "Length of %s option larger than expected. Possible older version of guest driver."
diff --git a/drivers/net/hinic/base/hinic_pmd_eqs.c b/drivers/net/hinic/base/hinic_pmd_eqs.c
index fecb653401..f0e1139a98 100644
--- a/drivers/net/hinic/base/hinic_pmd_eqs.c
+++ b/drivers/net/hinic/base/hinic_pmd_eqs.c
@@ -471,7 +471,7 @@ int hinic_comm_aeqs_init(struct hinic_hwdev *hwdev)
num_aeqs = HINIC_HWIF_NUM_AEQS(hwdev->hwif);
if (num_aeqs < HINIC_MIN_AEQS) {
- PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d\n",
+ PMD_DRV_LOG(ERR, "PMD need %d AEQs, Chip has %d",
HINIC_MIN_AEQS, num_aeqs);
return -EINVAL;
}
diff --git a/drivers/net/hinic/base/hinic_pmd_mbox.c b/drivers/net/hinic/base/hinic_pmd_mbox.c
index 92a7cc1a11..a75a6953ad 100644
--- a/drivers/net/hinic/base/hinic_pmd_mbox.c
+++ b/drivers/net/hinic/base/hinic_pmd_mbox.c
@@ -310,7 +310,7 @@ static int mbox_msg_ack_aeqn(struct hinic_hwdev *hwdev)
/* This is used for ovs */
msg_ack_aeqn = HINIC_AEQN_1;
} else {
- PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d\n", aeq_num);
+ PMD_DRV_LOG(ERR, "Warning: Invalid aeq num: %d", aeq_num);
msg_ack_aeqn = -1;
}
@@ -372,13 +372,13 @@ static int init_mbox_info(struct hinic_recv_mbox *mbox_info)
mbox_info->mbox = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
if (!mbox_info->mbox) {
- PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed\n");
+ PMD_DRV_LOG(ERR, "Alloc mbox buf_in mem failed");
return -ENOMEM;
}
mbox_info->buf_out = kzalloc(MBOX_MAX_BUF_SZ, GFP_KERNEL);
if (!mbox_info->buf_out) {
- PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed\n");
+ PMD_DRV_LOG(ERR, "Alloc mbox buf_out mem failed");
err = -ENOMEM;
goto alloc_buf_out_err;
}
diff --git a/drivers/net/hinic/base/hinic_pmd_niccfg.c b/drivers/net/hinic/base/hinic_pmd_niccfg.c
index 8c08d63286..a08020313f 100644
--- a/drivers/net/hinic/base/hinic_pmd_niccfg.c
+++ b/drivers/net/hinic/base/hinic_pmd_niccfg.c
@@ -683,7 +683,7 @@ int hinic_get_pause_info(void *hwdev, struct nic_pause_config *nic_pause)
&pause_info, sizeof(pause_info),
&pause_info, &out_size);
if (err || !out_size || pause_info.mgmt_msg_head.status) {
- PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x\n",
+ PMD_DRV_LOG(ERR, "Failed to get pause info, err: %d, status: 0x%x, out size: 0x%x",
err, pause_info.mgmt_msg_head.status, out_size);
return -EIO;
}
@@ -1332,7 +1332,7 @@ int hinic_get_mgmt_version(void *hwdev, char *fw)
&fw_ver, sizeof(fw_ver), &fw_ver,
&out_size);
if (err || !out_size || fw_ver.mgmt_msg_head.status) {
- PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x\n",
+ PMD_DRV_LOG(ERR, "Failed to get mgmt version, err: %d, status: 0x%x, out size: 0x%x",
err, fw_ver.mgmt_msg_head.status, out_size);
return -EIO;
}
@@ -1767,7 +1767,7 @@ int hinic_set_fdir_filter(void *hwdev, u8 filter_type, u8 qid, u8 type_enable,
&port_filer_cmd, &out_size);
if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
PMD_DRV_LOG(ERR, "Set port Q filter failed, err: %d, status: 0x%x, out size: 0x%x, type: 0x%x,"
- " enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x\n",
+ " enable: 0x%x, qid: 0x%x, filter_type_enable: 0x%x",
err, port_filer_cmd.mgmt_msg_head.status, out_size,
filter_type, enable, qid, type_enable);
return -EIO;
@@ -1819,7 +1819,7 @@ int hinic_set_normal_filter(void *hwdev, u8 qid, u8 normal_type_enable,
&port_filer_cmd, &out_size);
if (err || !out_size || port_filer_cmd.mgmt_msg_head.status) {
PMD_DRV_LOG(ERR, "Set normal filter failed, err: %d, status: 0x%x, out size: 0x%x, fdir_flag: 0x%x,"
- " enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x\n",
+ " enable: 0x%x, qid: 0x%x, normal_type_enable: 0x%x, key:0x%x",
err, port_filer_cmd.mgmt_msg_head.status, out_size,
flag, enable, qid, normal_type_enable, key);
return -EIO;
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index d4978e0649..cb5c013b21 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -1914,7 +1914,7 @@ static int hinic_flow_ctrl_set(struct rte_eth_dev *dev,
nic_dev->nic_pause.rx_pause = nic_pause.rx_pause;
nic_dev->nic_pause.tx_pause = nic_pause.tx_pause;
- PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s\n",
+ PMD_DRV_LOG(INFO, "Set pause options, tx: %s, rx: %s, auto: %s",
nic_pause.tx_pause ? "on" : "off",
nic_pause.rx_pause ? "on" : "off",
nic_pause.auto_neg ? "on" : "off");
@@ -2559,7 +2559,7 @@ static int hinic_pf_get_default_cos(struct hinic_hwdev *hwdev, u8 *cos_id)
valid_cos_bitmap = hwdev->cfg_mgmt->svc_cap.valid_cos_bitmap;
if (!valid_cos_bitmap) {
- PMD_DRV_LOG(ERR, "PF has none cos to support\n");
+ PMD_DRV_LOG(ERR, "PF has none cos to support");
return -EFAULT;
}
diff --git a/drivers/net/hns3/hns3_dump.c b/drivers/net/hns3/hns3_dump.c
index cb369be5be..a3b58e0a8f 100644
--- a/drivers/net/hns3/hns3_dump.c
+++ b/drivers/net/hns3/hns3_dump.c
@@ -242,7 +242,7 @@ hns3_get_rx_queue(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_rx_queues; queue_id++) {
rx_queues = dev->data->rx_queues;
if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect rx_queues is NULL!\n");
+ hns3_err(hw, "detect rx_queues is NULL!");
return NULL;
}
@@ -267,7 +267,7 @@ hns3_get_tx_queue(struct rte_eth_dev *dev)
for (queue_id = 0; queue_id < dev->data->nb_tx_queues; queue_id++) {
tx_queues = dev->data->tx_queues;
if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect tx_queues is NULL!\n");
+ hns3_err(hw, "detect tx_queues is NULL!");
return NULL;
}
@@ -297,7 +297,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
if (dev->data->nb_rx_queues < dev->data->nb_tx_queues) {
rx_queues = hw->fkq_data.rx_queues;
if (rx_queues == NULL || rx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect rx_queues is NULL!\n");
+ hns3_err(hw, "detect rx_queues is NULL!");
return;
}
rxq = (struct hns3_rx_queue *)rx_queues[queue_id];
@@ -311,7 +311,7 @@ hns3_get_rxtx_fake_queue_info(FILE *file, struct rte_eth_dev *dev)
queue_id = 0;
if (tx_queues == NULL || tx_queues[queue_id] == NULL) {
- hns3_err(hw, "detect tx_queues is NULL!\n");
+ hns3_err(hw, "detect tx_queues is NULL!");
return;
}
txq = (struct hns3_tx_queue *)tx_queues[queue_id];
@@ -961,7 +961,7 @@ hns3_rx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
return -EINVAL;
if (num > rxq->nb_rx_desc) {
- hns3_err(hw, "Invalid BD num=%u\n", num);
+ hns3_err(hw, "Invalid BD num=%u", num);
return -EINVAL;
}
@@ -1003,7 +1003,7 @@ hns3_tx_descriptor_dump(const struct rte_eth_dev *dev, uint16_t queue_id,
return -EINVAL;
if (num > txq->nb_tx_desc) {
- hns3_err(hw, "Invalid BD num=%u\n", num);
+ hns3_err(hw, "Invalid BD num=%u", num);
return -EINVAL;
}
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 916bf30dcb..0b768ef140 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -1806,7 +1806,7 @@ enable_tm_err_intr(struct hns3_adapter *hns, bool en)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret)
- hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d\n",
+ hns3_err(hw, "fail to %s TM QCN mem errors, ret = %d",
en ? "enable" : "disable", ret);
return ret;
@@ -1847,7 +1847,7 @@ enable_common_err_intr(struct hns3_adapter *hns, bool en)
ret = hns3_cmd_send(hw, &desc[0], RTE_DIM(desc));
if (ret)
- hns3_err(hw, "fail to %s common err interrupts, ret = %d\n",
+ hns3_err(hw, "fail to %s common err interrupts, ret = %d",
en ? "enable" : "disable", ret);
return ret;
@@ -1984,7 +1984,7 @@ query_num_bds(struct hns3_hw *hw, bool is_ras, uint32_t *mpf_bd_num,
pf_bd_num_val = rte_le_to_cpu_32(desc.data[1]);
if (mpf_bd_num_val < mpf_min_bd_num || pf_bd_num_val < pf_min_bd_num) {
hns3_err(hw, "error bd num: mpf(%u), min_mpf(%u), "
- "pf(%u), min_pf(%u)\n", mpf_bd_num_val, mpf_min_bd_num,
+ "pf(%u), min_pf(%u)", mpf_bd_num_val, mpf_min_bd_num,
pf_bd_num_val, pf_min_bd_num);
return -EINVAL;
}
@@ -2061,7 +2061,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
opcode = HNS3_OPC_QUERY_CLEAR_PF_RAS_INT;
break;
default:
- hns3_err(hw, "error hardware err_type = %d\n", err_type);
+ hns3_err(hw, "error hardware err_type = %d", err_type);
return -EINVAL;
}
@@ -2069,7 +2069,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
hns3_cmd_setup_basic_desc(&desc[0], opcode, true);
ret = hns3_cmd_send(hw, &desc[0], num);
if (ret) {
- hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d\n",
+ hns3_err(hw, "query hw err int 0x%x cmd failed, ret = %d",
opcode, ret);
return ret;
}
@@ -2097,7 +2097,7 @@ hns3_handle_hw_error(struct hns3_adapter *hns, struct hns3_cmd_desc *desc,
hns3_cmd_reuse_desc(&desc[0], false);
ret = hns3_cmd_send(hw, &desc[0], num);
if (ret)
- hns3_err(hw, "clear all hw err int cmd failed, ret = %d\n",
+ hns3_err(hw, "clear all hw err int cmd failed, ret = %d",
ret);
return ret;
diff --git a/drivers/net/hns3/hns3_ptp.c b/drivers/net/hns3/hns3_ptp.c
index 894ac6dd71..c6e77d21cb 100644
--- a/drivers/net/hns3/hns3_ptp.c
+++ b/drivers/net/hns3/hns3_ptp.c
@@ -50,7 +50,7 @@ hns3_ptp_int_en(struct hns3_hw *hw, bool en)
ret = hns3_cmd_send(hw, &desc, 1);
if (ret)
hns3_err(hw,
- "failed to %s ptp interrupt, ret = %d\n",
+ "failed to %s ptp interrupt, ret = %d",
en ? "enable" : "disable", ret);
return ret;
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index be1be6a89c..955bc7e3af 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -355,7 +355,7 @@ hns3_get_dfx_reg_bd_num(struct hns3_hw *hw, uint32_t *bd_num_list,
ret = hns3_cmd_send(hw, desc, HNS3_GET_DFX_REG_BD_NUM_SIZE);
if (ret) {
- hns3_err(hw, "fail to get dfx bd num, ret = %d.\n", ret);
+ hns3_err(hw, "fail to get dfx bd num, ret = %d.", ret);
return ret;
}
@@ -387,7 +387,7 @@ hns3_dfx_reg_cmd_send(struct hns3_hw *hw, struct hns3_cmd_desc *desc,
ret = hns3_cmd_send(hw, desc, bd_num);
if (ret)
hns3_err(hw, "fail to query dfx registers, opcode = 0x%04X, "
- "ret = %d.\n", opcode, ret);
+ "ret = %d.", opcode, ret);
return ret;
}
diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ffc1f6d874..2b043cd693 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+ PMD_DRV_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
val = I40E_READ_REG(hw, I40E_GL_FWSTS);
if (val & I40E_GL_FWSTS_FWS1B_MASK) {
- PMD_INIT_LOG(ERR, "\nERROR: "
- "Firmware recovery mode detected. Limiting functionality.\n"
- "Refer to the Intel(R) Ethernet Adapters and Devices "
- "User Guide for details on firmware recovery mode.");
+ PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
return -EIO;
}
@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
status = i40e_aq_get_phy_capabilities(hw, false, true, &phy_ab,
NULL);
if (status) {
- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
status);
return ret;
}
@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
status = i40e_aq_get_phy_capabilities(hw, false, false, &phy_ab,
NULL);
if (status) {
- PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d\n",
+ PMD_DRV_LOG(ERR, "Failed to get the current PHY config: %d",
status);
return ret;
}
@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
* Warn users and config the default available speeds.
*/
if (is_up && !(force_speed & avail_speed)) {
- PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+ PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
phy_conf.link_speed = avail_speed;
} else {
phy_conf.link_speed = is_up ? force_speed : avail_speed;
@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_GL_MDET_TX_QUEUE_SHIFT) -
hw->func_caps.base_queue;
PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on TX "
- "queue %d PF number 0x%02x VF number 0x%02x device %s\n",
+ "queue %d PF number 0x%02x VF number 0x%02x device %s",
event, queue, pf_num, vf_num, dev->data->name);
I40E_WRITE_REG(hw, I40E_GL_MDET_TX, I40E_MDD_CLEAR32);
mdd_detected = true;
@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
hw->func_caps.base_queue;
PMD_DRV_LOG(WARNING, "Malicious Driver Detection event 0x%02x on RX "
- "queue %d of function 0x%02x device %s\n",
+ "queue %d of function 0x%02x device %s",
event, queue, func, dev->data->name);
I40E_WRITE_REG(hw, I40E_GL_MDET_RX, I40E_MDD_CLEAR32);
mdd_detected = true;
@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
reg = I40E_READ_REG(hw, I40E_PF_MDET_TX);
if (reg & I40E_PF_MDET_TX_VALID_MASK) {
I40E_WRITE_REG(hw, I40E_PF_MDET_TX, I40E_MDD_CLEAR16);
- PMD_DRV_LOG(WARNING, "TX driver issue detected on PF\n");
+ PMD_DRV_LOG(WARNING, "TX driver issue detected on PF");
}
reg = I40E_READ_REG(hw, I40E_PF_MDET_RX);
if (reg & I40E_PF_MDET_RX_VALID_MASK) {
I40E_WRITE_REG(hw, I40E_PF_MDET_RX,
I40E_MDD_CLEAR16);
- PMD_DRV_LOG(WARNING, "RX driver issue detected on PF\n");
+ PMD_DRV_LOG(WARNING, "RX driver issue detected on PF");
}
}
@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_MDD_CLEAR16);
vf->num_mdd_events++;
PMD_DRV_LOG(WARNING, "TX driver issue detected on VF %d %-"
- PRIu64 "times\n",
+ PRIu64 "times",
i, vf->num_mdd_events);
}
@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
I40E_MDD_CLEAR16);
vf->num_mdd_events++;
PMD_DRV_LOG(WARNING, "RX driver issue detected on VF %d %-"
- PRIu64 "times\n",
+ PRIu64 "times",
i, vf->num_mdd_events);
}
}
@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (!(hw->flags & I40E_HW_FLAG_AQ_PHY_ACCESS_CAPABLE)) {
PMD_DRV_LOG(ERR,
"Module EEPROM memory read not supported. "
- "Please update the NVM image.\n");
+ "Please update the NVM image.");
return -EINVAL;
}
@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (hw->phy.link_info.phy_type == I40E_PHY_TYPE_EMPTY) {
PMD_DRV_LOG(ERR,
"Cannot read module EEPROM memory. "
- "No module connected.\n");
+ "No module connected.");
return -EINVAL;
}
@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
if (sff8472_swap & I40E_MODULE_SFF_ADDR_MODE) {
PMD_DRV_LOG(WARNING,
"Module address swap to access "
- "page 0xA2 is not supported.\n");
+ "page 0xA2 is not supported.");
modinfo->type = RTE_ETH_MODULE_SFF_8079;
modinfo->eeprom_len = RTE_ETH_MODULE_SFF_8079_LEN;
} else if (sff8472_comp == 0x00) {
@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
modinfo->eeprom_len = I40E_MODULE_QSFP_MAX_LEN;
break;
default:
- PMD_DRV_LOG(ERR, "Module type unrecognized\n");
+ PMD_DRV_LOG(ERR, "Module type unrecognized");
return -EINVAL;
}
return 0;
@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
}
}
name[strlen(name) - 1] = '\0';
- PMD_DRV_LOG(INFO, "name = %s\n", name);
+ PMD_DRV_LOG(INFO, "name = %s", name);
if (!strcmp(name, "GTPC"))
new_pctype =
i40e_find_customized_pctype(pf,
@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
continue;
memset(name, 0, sizeof(name));
strcpy(name, proto[n].name);
- PMD_DRV_LOG(INFO, "name = %s\n", name);
+ PMD_DRV_LOG(INFO, "name = %s", name);
if (!strncasecmp(name, "PPPOE", 5))
ptype_mapping[i].sw_ptype |=
RTE_PTYPE_L2_ETHER_PPPOE;
diff --git a/drivers/net/i40e/i40e_pf.c b/drivers/net/i40e/i40e_pf.c
index 15d9ff868f..4a47a8f7ee 100644
--- a/drivers/net/i40e/i40e_pf.c
+++ b/drivers/net/i40e/i40e_pf.c
@@ -1280,17 +1280,17 @@ i40e_pf_host_process_cmd_request_queues(struct i40e_pf_vf *vf, uint8_t *msg)
req_pairs = i40e_align_floor(req_pairs) << 1;
if (req_pairs == 0) {
- PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.\n",
+ PMD_DRV_LOG(ERR, "VF %d tried to request 0 queues. Ignoring.",
vf->vf_idx);
} else if (req_pairs > I40E_MAX_QP_NUM_PER_VF) {
PMD_DRV_LOG(ERR,
- "VF %d tried to request more than %d queues.\n",
+ "VF %d tried to request more than %d queues.",
vf->vf_idx,
I40E_MAX_QP_NUM_PER_VF);
vfres->num_queue_pairs = I40E_MAX_QP_NUM_PER_VF;
} else if (req_pairs > cur_pairs + pf->qp_pool.num_free) {
PMD_DRV_LOG(ERR, "VF %d requested %d queues (rounded to %d) "
- "but only %d available\n",
+ "but only %d available",
vf->vf_idx,
vfres->num_queue_pairs,
req_pairs,
@@ -1550,7 +1550,7 @@ check:
if (first_cycle && cur_cycle < first_cycle +
(uint64_t)pf->vf_msg_cfg.period * rte_get_timer_hz()) {
PMD_DRV_LOG(WARNING, "VF %u too much messages(%u in %u"
- " seconds),\n\tany new message from which"
+ " seconds), any new message from which"
" will be ignored during next %u seconds!",
vf_id, pf->vf_msg_cfg.max_msg,
(uint32_t)((cur_cycle - first_cycle +
diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c
index 5e693cb1ea..e65e8829d9 100644
--- a/drivers/net/i40e/i40e_rxtx.c
+++ b/drivers/net/i40e/i40e_rxtx.c
@@ -1229,11 +1229,11 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ctx_txd->type_cmd_tso_mss =
rte_cpu_to_le_64(cd_type_cmd_tso_mss);
- PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]:\n"
- "tunneling_params: %#x;\n"
- "l2tag2: %#hx;\n"
- "rsvd: %#hx;\n"
- "type_cmd_tso_mss: %#"PRIx64";\n",
+ PMD_TX_LOG(DEBUG, "mbuf: %p, TCD[%u]: "
+ "tunneling_params: %#x; "
+ "l2tag2: %#hx; "
+ "rsvd: %#hx; "
+ "type_cmd_tso_mss: %#"PRIx64";",
tx_pkt, tx_id,
ctx_txd->tunneling_params,
ctx_txd->l2tag2,
@@ -1276,12 +1276,12 @@ i40e_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txd = &txr[tx_id];
txn = &sw_ring[txe->next_id];
}
- PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]:\n"
- "buf_dma_addr: %#"PRIx64";\n"
- "td_cmd: %#x;\n"
- "td_offset: %#x;\n"
- "td_len: %u;\n"
- "td_tag: %#x;\n",
+ PMD_TX_LOG(DEBUG, "mbuf: %p, TDD[%u]: "
+ "buf_dma_addr: %#"PRIx64"; "
+ "td_cmd: %#x; "
+ "td_offset: %#x; "
+ "td_len: %u; "
+ "td_tag: %#x;",
tx_pkt, tx_id, buf_dma_addr,
td_cmd, td_offset, slen, td_tag);
@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
txq->queue_id);
else
PMD_INIT_LOG(DEBUG,
- "Neither simple nor vector Tx enabled on Tx queue %u\n",
+ "Neither simple nor vector Tx enabled on Tx queue %u",
txq->queue_id);
}
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 54bff05675..9087909ec2 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
kvlist = rte_kvargs_parse(devargs->args, iavf_valid_args);
if (!kvlist) {
- PMD_INIT_LOG(ERR, "invalid kvargs key\n");
+ PMD_INIT_LOG(ERR, "invalid kvargs key");
return -EINVAL;
}
@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
if (ad->devargs.quanta_size != 0 &&
(ad->devargs.quanta_size < 256 || ad->devargs.quanta_size > 4096 ||
ad->devargs.quanta_size & 0x40)) {
- PMD_INIT_LOG(ERR, "invalid quanta size\n");
+ PMD_INIT_LOG(ERR, "invalid quanta size");
ret = -EINVAL;
goto bail;
}
@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
*/
ret = iavf_check_vf_reset_done(hw);
if (ret) {
- PMD_DRV_LOG(ERR, "Wait too long for reset done!\n");
+ PMD_DRV_LOG(ERR, "Wait too long for reset done!");
return ret;
}
iavf_set_no_poll(adapter, false);
- PMD_DRV_LOG(DEBUG, "Start dev_reset ...\n");
+ PMD_DRV_LOG(DEBUG, "Start dev_reset ...");
ret = iavf_dev_uninit(dev);
if (ret)
return ret;
@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
return;
if (!iavf_is_reset_detected(adapter)) {
- PMD_DRV_LOG(DEBUG, "reset not start\n");
+ PMD_DRV_LOG(DEBUG, "reset not start");
return;
}
@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
goto exit;
error:
- PMD_DRV_LOG(DEBUG, "RESET recover with error code=%d\n", ret);
+ PMD_DRV_LOG(DEBUG, "RESET recover with error code=%dn", ret);
exit:
vf->in_reset_recovery = false;
iavf_set_no_poll(adapter, false);
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index f19aa14646..ec0dffa30e 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
up = m->vlan_tci >> IAVF_VLAN_TAG_PCP_OFFSET;
if (!(vf->qos_cap->cap[txq->tc].tc_prio & BIT(up))) {
- PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u\n",
+ PMD_TX_LOG(ERR, "packet with vlan pcp %u cannot transmit in queue %u",
up, txq->queue_id);
return -1;
} else {
diff --git a/drivers/net/ice/ice_dcf_ethdev.c b/drivers/net/ice/ice_dcf_ethdev.c
index 5d845bba31..a025b0ea7f 100644
--- a/drivers/net/ice/ice_dcf_ethdev.c
+++ b/drivers/net/ice/ice_dcf_ethdev.c
@@ -1646,7 +1646,7 @@ ice_dcf_init_repr_info(struct ice_dcf_adapter *dcf_adapter)
dcf_adapter->real_hw.num_vfs,
sizeof(dcf_adapter->repr_infos[0]), 0);
if (!dcf_adapter->repr_infos) {
- PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors\n");
+ PMD_DRV_LOG(ERR, "Failed to alloc memory for VF representors");
return -ENOMEM;
}
@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
}
if (dcf_adapter->real_hw.vf_vsi_map[vf_id] == dcf_vsi_id) {
- PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.\n", vf_id);
+ PMD_DRV_LOG(ERR, "VF ID %u is DCF's ID.", vf_id);
ret = -EINVAL;
break;
}
diff --git a/drivers/net/ice/ice_dcf_vf_representor.c b/drivers/net/ice/ice_dcf_vf_representor.c
index af281f069a..564ff02fd8 100644
--- a/drivers/net/ice/ice_dcf_vf_representor.c
+++ b/drivers/net/ice/ice_dcf_vf_representor.c
@@ -133,7 +133,7 @@ ice_dcf_vf_repr_hw(struct ice_dcf_vf_repr *repr)
struct ice_dcf_adapter *dcf_adapter;
if (!repr->dcf_valid) {
- PMD_DRV_LOG(ERR, "DCF for VF representor has been released\n");
+ PMD_DRV_LOG(ERR, "DCF for VF representor has been released");
return NULL;
}
@@ -272,7 +272,7 @@ ice_dcf_vf_repr_vlan_offload_set(struct rte_eth_dev *dev, int mask)
if (enable && repr->outer_vlan_info.port_vlan_ena) {
PMD_DRV_LOG(ERR,
- "Disable the port VLAN firstly\n");
+ "Disable the port VLAN firstly");
return -EINVAL;
}
@@ -318,7 +318,7 @@ ice_dcf_vf_repr_vlan_pvid_set(struct rte_eth_dev *dev,
if (repr->outer_vlan_info.stripping_ena) {
PMD_DRV_LOG(ERR,
- "Disable the VLAN stripping firstly\n");
+ "Disable the VLAN stripping firstly");
return -EINVAL;
}
@@ -367,7 +367,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
if (vlan_type != RTE_ETH_VLAN_TYPE_OUTER) {
PMD_DRV_LOG(ERR,
- "Can accelerate only outer VLAN in QinQ\n");
+ "Can accelerate only outer VLAN in QinQ");
return -EINVAL;
}
@@ -375,7 +375,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
tpid != RTE_ETHER_TYPE_VLAN &&
tpid != RTE_ETHER_TYPE_QINQ1) {
PMD_DRV_LOG(ERR,
- "Invalid TPID: 0x%04x\n", tpid);
+ "Invalid TPID: 0x%04x", tpid);
return -EINVAL;
}
@@ -387,7 +387,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
true);
if (err) {
PMD_DRV_LOG(ERR,
- "Failed to reset port VLAN : %d\n",
+ "Failed to reset port VLAN : %d",
err);
return err;
}
@@ -398,7 +398,7 @@ ice_dcf_vf_repr_vlan_tpid_set(struct rte_eth_dev *dev,
RTE_ETH_VLAN_STRIP_MASK);
if (err) {
PMD_DRV_LOG(ERR,
- "Failed to reset VLAN stripping : %d\n",
+ "Failed to reset VLAN stripping : %d",
err);
return err;
}
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index c1d2b91ad7..86f43050a5 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -1867,7 +1867,7 @@ no_dsn:
strncpy(pkg_file, ICE_PKG_FILE_DEFAULT, ICE_MAX_PKG_FILENAME_SIZE);
if (rte_firmware_read(pkg_file, &buf, &bufsz) < 0) {
- PMD_INIT_LOG(ERR, "failed to search file path\n");
+ PMD_INIT_LOG(ERR, "failed to search file path");
return -1;
}
@@ -1876,7 +1876,7 @@ load_fw:
err = ice_copy_and_init_pkg(hw, buf, bufsz);
if (!ice_is_init_pkg_successful(err)) {
- PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d\n", err);
+ PMD_INIT_LOG(ERR, "ice_copy_and_init_hw failed: %d", err);
free(buf);
return -1;
}
@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
kvlist = rte_kvargs_parse(devargs->args, ice_valid_args);
if (kvlist == NULL) {
- PMD_INIT_LOG(ERR, "Invalid kvargs key\n");
+ PMD_INIT_LOG(ERR, "Invalid kvargs key");
return -EINVAL;
}
@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
if (pos) {
if (rte_pci_read_config(pci_dev, &dsn_low, 4, pos + 4) < 0 ||
rte_pci_read_config(pci_dev, &dsn_high, 4, pos + 8) < 0) {
- PMD_INIT_LOG(ERR, "Failed to read pci config space\n");
+ PMD_INIT_LOG(ERR, "Failed to read pci config space");
} else {
use_dsn = true;
dsn = (uint64_t)dsn_high << 32 | dsn_low;
}
} else {
- PMD_INIT_LOG(ERR, "Failed to read device serial number\n");
+ PMD_INIT_LOG(ERR, "Failed to read device serial number");
}
ret = ice_load_pkg(pf->adapter, use_dsn, dsn);
if (ret == 0) {
ret = ice_init_hw_tbls(hw);
if (ret) {
- PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d\n", ret);
+ PMD_INIT_LOG(ERR, "ice_init_hw_tbls failed: %d", ret);
rte_free(hw->pkg_copy);
}
}
@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)
ret = ice_aq_stop_lldp(hw, true, false, NULL);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "lldp has already stopped\n");
+ PMD_INIT_LOG(DEBUG, "lldp has already stopped");
ret = ice_init_dcb(hw, true);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "Failed to init DCB\n");
+ PMD_INIT_LOG(DEBUG, "Failed to init DCB");
/* Forward LLDP packets to default VSI */
ret = ice_vsi_config_sw_lldp(vsi, true);
if (ret != ICE_SUCCESS)
- PMD_INIT_LOG(DEBUG, "Failed to cfg lldp\n");
+ PMD_INIT_LOG(DEBUG, "Failed to cfg lldp");
/* register callback func to eal lib */
rte_intr_callback_register(intr_handle,
ice_interrupt_handler, dev);
@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
if (hw->phy_cfg == ICE_PHY_E822) {
ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
if (ret)
- PMD_INIT_LOG(ERR, "Failed to start phy timer\n");
+ PMD_INIT_LOG(ERR, "Failed to start phy timer");
}
if (!ad->is_safe_mode) {
@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
status = ice_rem_rss_cfg(hw, vsi->idx, cfg);
if (status && status != ICE_ERR_DOES_NOT_EXIST) {
PMD_DRV_LOG(ERR,
- "ice_rem_rss_cfg failed for VSI:%d, error:%d\n",
+ "ice_rem_rss_cfg failed for VSI:%d, error:%d",
vsi->idx, status);
return -EBUSY;
}
@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
status = ice_add_rss_cfg(hw, vsi->idx, cfg);
if (status) {
PMD_DRV_LOG(ERR,
- "ice_add_rss_cfg failed for VSI:%d, error:%d\n",
+ "ice_add_rss_cfg failed for VSI:%d, error:%d",
vsi->idx, status);
return -EBUSY;
}
@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
ret = ice_rem_rss_cfg(hw, vsi_id, cfg);
if (ret && ret != ICE_ERR_DOES_NOT_EXIST)
- PMD_DRV_LOG(ERR, "remove rss cfg failed\n");
+ PMD_DRV_LOG(ERR, "remove rss cfg failed");
ice_rem_rss_cfg_post(pf, cfg->addl_hdrs);
@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
ret = ice_add_rss_cfg_pre(pf, cfg->addl_hdrs);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg pre failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg pre failed");
ret = ice_add_rss_cfg(hw, vsi_id, cfg);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg failed");
ret = ice_add_rss_cfg_post(pf, cfg);
if (ret)
- PMD_DRV_LOG(ERR, "add rss cfg post failed\n");
+ PMD_DRV_LOG(ERR, "add rss cfg post failed");
return 0;
}
@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
if (rss_key_size > sizeof(default_key)) {
PMD_DRV_LOG(WARNING,
"requested size %u is larger than default %zu, "
- "only %zu bytes are gotten for key\n",
+ "only %zu bytes are gotten for key",
rss_key_size, sizeof(default_key),
sizeof(default_key));
}
@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)
if (nb_q == 0) {
PMD_DRV_LOG(WARNING,
- "RSS is not supported as rx queues number is zero\n");
+ "RSS is not supported as rx queues number is zero");
return 0;
}
if (is_safe_mode) {
- PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode\n");
+ PMD_DRV_LOG(WARNING, "RSS is not supported in safe mode");
return 0;
}
@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
cfg.phy_type_low = phy_type_low & phy_caps->phy_type_low;
cfg.phy_type_high = phy_type_high & phy_caps->phy_type_high;
} else {
- PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!\n");
+ PMD_DRV_LOG(WARNING, "Invalid speed setting, set to default!");
cfg.phy_type_low = phy_caps->phy_type_low;
cfg.phy_type_high = phy_caps->phy_type_high;
}
@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
}
break;
default:
- PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.\n");
+ PMD_DRV_LOG(WARNING, "SFF Module Type not recognized.");
return -EINVAL;
}
return 0;
@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
0, NULL);
PMD_DRV_LOG(DEBUG, "SFF %02X %02X %02X %X = "
"%02X%02X%02X%02X."
- "%02X%02X%02X%02X (%X)\n",
+ "%02X%02X%02X%02X (%X)",
addr, offset, page, is_sfp,
value[0], value[1],
value[2], value[3],
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 0b7920ad44..dd9130ace3 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -334,7 +334,7 @@ ice_fdir_counter_alloc(struct ice_pf *pf, uint32_t shared, uint32_t id)
}
if (!counter_free) {
- PMD_DRV_LOG(ERR, "No free counter found\n");
+ PMD_DRV_LOG(ERR, "No free counter found");
return NULL;
}
diff --git a/drivers/net/ice/ice_hash.c b/drivers/net/ice/ice_hash.c
index d8c46347d2..dad117679d 100644
--- a/drivers/net/ice/ice_hash.c
+++ b/drivers/net/ice/ice_hash.c
@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
ice_get_hw_vsi_num(hw, vsi_handle),
id);
if (ret) {
- PMD_DRV_LOG(ERR, "remove RSS flow failed\n");
+ PMD_DRV_LOG(ERR, "remove RSS flow failed");
return ret;
}
ret = ice_rem_prof(hw, ICE_BLK_RSS, id);
if (ret) {
- PMD_DRV_LOG(ERR, "remove RSS profile failed\n");
+ PMD_DRV_LOG(ERR, "remove RSS profile failed");
return ret;
}
}
@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
/* add new profile */
ret = ice_flow_set_hw_prof(hw, vsi_handle, 0, prof, ICE_BLK_RSS);
if (ret) {
- PMD_DRV_LOG(ERR, "HW profile add failed\n");
+ PMD_DRV_LOG(ERR, "HW profile add failed");
return ret;
}
@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
return 0;
err:
- PMD_DRV_LOG(ERR, "HW profile remove failed\n");
+ PMD_DRV_LOG(ERR, "HW profile remove failed");
return ret;
}
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index dea6a5b535..7da314217a 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
if (!(txd[desc_to_clean_to].cmd_type_offset_bsz &
rte_cpu_to_le_64(ICE_TX_DESC_DTYPE_DESC_DONE))) {
PMD_TX_LOG(DEBUG, "TX descriptor %4u is not done "
- "(port=%d queue=%d) value=0x%"PRIx64"\n",
+ "(port=%d queue=%d) value=0x%"PRIx64,
desc_to_clean_to,
txq->port_id, txq->queue_id,
txd[desc_to_clean_to].cmd_type_offset_bsz);
diff --git a/drivers/net/ipn3ke/ipn3ke_ethdev.c b/drivers/net/ipn3ke/ipn3ke_ethdev.c
index 2c15611a23..baae80d661 100644
--- a/drivers/net/ipn3ke/ipn3ke_ethdev.c
+++ b/drivers/net/ipn3ke/ipn3ke_ethdev.c
@@ -203,7 +203,7 @@ ipn3ke_vbng_init_done(struct ipn3ke_hw *hw)
}
if (!timeout) {
- IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.\n");
+ IPN3KE_AFU_PMD_ERR("IPN3KE vBNG INIT timeout.");
return -1;
}
@@ -348,7 +348,7 @@ ipn3ke_hw_init(struct rte_afu_device *afu_dev,
hw->acc_tm = 1;
hw->acc_flow = 1;
- IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x\n",
+ IPN3KE_AFU_PMD_DEBUG("UPL_version is 0x%x",
IPN3KE_READ_REG(hw, 0));
}
diff --git a/drivers/net/ipn3ke/ipn3ke_flow.c b/drivers/net/ipn3ke/ipn3ke_flow.c
index d20a29b9a2..a2f76268b5 100644
--- a/drivers/net/ipn3ke/ipn3ke_flow.c
+++ b/drivers/net/ipn3ke/ipn3ke_flow.c
@@ -993,7 +993,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
uint32_t time_out = MHL_COMMAND_TIME_COUNT;
uint32_t i;
- IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start\n");
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump start");
pdata = (uint32_t *)flow->rule.key;
IPN3KE_AFU_PMD_DEBUG(" - key :");
@@ -1003,7 +1003,6 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
for (i = 0; i < 4; i++)
IPN3KE_AFU_PMD_DEBUG(" %02x", ipn3ke_swap32(pdata[3 - i]));
- IPN3KE_AFU_PMD_DEBUG("\n");
pdata = (uint32_t *)flow->rule.result;
IPN3KE_AFU_PMD_DEBUG(" - result:");
@@ -1013,7 +1012,7 @@ ipn3ke_flow_hw_update(struct ipn3ke_hw *hw,
for (i = 0; i < 1; i++)
IPN3KE_AFU_PMD_DEBUG(" %02x", pdata[i]);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end\n");
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE flow dump end");
pdata = (uint32_t *)flow->rule.key;
@@ -1254,7 +1253,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_RX_TEST,
0,
0x1);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_TEST: %x", data);
/* configure base mac address */
IPN3KE_MASK_WRITE_REG(hw,
@@ -1268,7 +1267,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_BASE_DST_MAC_ADDR_HI,
0,
0xFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_HI: %x", data);
IPN3KE_MASK_WRITE_REG(hw,
IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
@@ -1281,7 +1280,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_BASE_DST_MAC_ADDR_LOW: %x", data);
/* configure hash lookup rules enable */
@@ -1296,7 +1295,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_LKUP_ENABLE,
0,
0xFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_LKUP_ENABLE: %x", data);
/* configure rx parse config, settings associated with VxLAN */
@@ -1311,7 +1310,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_RX_PARSE_CFG,
0,
0x3FFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_RX_PARSE_CFG: %x", data);
/* configure QinQ S-Tag */
@@ -1326,7 +1325,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_QINQ_STAG,
0,
0xFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_QINQ_STAG: %x", data);
/* configure gen ctrl */
@@ -1341,7 +1340,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_MHL_GEN_CTRL,
0,
0x1F);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_GEN_CTRL: %x", data);
/* clear monitoring register */
@@ -1356,7 +1355,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_MHL_MON_0,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x\n", data);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_MHL_MON_0: %x", data);
ipn3ke_flow_hw_flush(hw);
@@ -1366,7 +1365,7 @@ int ipn3ke_flow_init(void *dev)
IPN3KE_CLF_EM_NUM,
0,
0xFFFFFFFF);
- IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x\n", hw->flow_max_entries);
+ IPN3KE_AFU_PMD_DEBUG("IPN3KE_CLF_EN_NUM: %x", hw->flow_max_entries);
hw->flow_num_entries = 0;
return 0;
diff --git a/drivers/net/ipn3ke/ipn3ke_representor.c b/drivers/net/ipn3ke/ipn3ke_representor.c
index 8145f1bb2a..feb57420c3 100644
--- a/drivers/net/ipn3ke/ipn3ke_representor.c
+++ b/drivers/net/ipn3ke/ipn3ke_representor.c
@@ -2401,8 +2401,8 @@ ipn3ke_update_link(struct rte_rawdev *rawdev,
else
link->link_status = 0;
- IPN3KE_AFU_PMD_DEBUG("port is %d\n", port);
- IPN3KE_AFU_PMD_DEBUG("link->link_status is %d\n", link->link_status);
+ IPN3KE_AFU_PMD_DEBUG("port is %d", port);
+ IPN3KE_AFU_PMD_DEBUG("link->link_status is %d", link->link_status);
rawdev->dev_ops->attr_get(rawdev,
"LineSideLinkSpeed",
@@ -2479,14 +2479,14 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
if (!rpst->ori_linfo.link_status &&
link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Up", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up\n",
+ IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Up",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2494,7 +2494,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
}
} else if (rpst->ori_linfo.link_status &&
!link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Update Rpst %d Down",
rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
@@ -2502,7 +2502,7 @@ ipn3ke_rpst_link_update(struct rte_eth_dev *ethdev,
rte_eth_linkstatus_set(ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Update FVL PF %d Down",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2537,14 +2537,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
if (!rpst->ori_linfo.link_status &&
link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Up", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(rpst->ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up\n",
+ IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Up",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_up(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
@@ -2552,14 +2552,14 @@ ipn3ke_rpst_link_check(struct ipn3ke_rpst *rpst)
}
} else if (rpst->ori_linfo.link_status &&
!link.link_status) {
- IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down\n", rpst->port_id);
+ IPN3KE_AFU_PMD_DEBUG("Check Rpst %d Down", rpst->port_id);
rpst->ori_linfo.link_status = link.link_status;
rpst->ori_linfo.link_speed = link.link_speed;
rte_eth_linkstatus_set(rpst->ethdev, &link);
if (rpst->i40e_pf_eth) {
- IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down\n",
+ IPN3KE_AFU_PMD_DEBUG("Check FVL PF %d Down",
rpst->i40e_pf_eth_port_id);
rte_eth_dev_set_link_down(rpst->i40e_pf_eth_port_id);
pf = rpst->i40e_pf_eth;
diff --git a/drivers/net/ipn3ke/ipn3ke_tm.c b/drivers/net/ipn3ke/ipn3ke_tm.c
index 0260227900..44a8b88699 100644
--- a/drivers/net/ipn3ke/ipn3ke_tm.c
+++ b/drivers/net/ipn3ke/ipn3ke_tm.c
@@ -1934,10 +1934,10 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
tm_id = tm->tm_id;
- IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***\n", tm_id);
+ IPN3KE_AFU_PMD_DEBUG("***HQoS Tree(%d)***", tm_id);
port_n = tm->h.port_node;
- IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)\n", port_n->node_index,
+ IPN3KE_AFU_PMD_DEBUG("Port: (%d|%s)", port_n->node_index,
str_state[port_n->node_state]);
vt_nl = &tm->h.port_node->children_node_list;
@@ -1951,7 +1951,6 @@ ipn3ke_tm_show(struct rte_eth_dev *dev)
cos_n->node_index,
str_state[cos_n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
}
}
@@ -1969,14 +1968,13 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
tm_id = tm->tm_id;
- IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***\n", tm_id);
+ IPN3KE_AFU_PMD_DEBUG("***Commit Tree(%d)***", tm_id);
n = tm->h.port_commit_node;
IPN3KE_AFU_PMD_DEBUG("Port: ");
if (n)
IPN3KE_AFU_PMD_DEBUG("(%d|%s)",
n->node_index,
str_state[n->node_state]);
- IPN3KE_AFU_PMD_DEBUG("\n");
nl = &tm->h.vt_commit_node_list;
IPN3KE_AFU_PMD_DEBUG("VT : ");
@@ -1985,7 +1983,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
n->node_index,
str_state[n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
nl = &tm->h.cos_commit_node_list;
IPN3KE_AFU_PMD_DEBUG("COS : ");
@@ -1994,7 +1991,6 @@ ipn3ke_tm_show_commmit(struct rte_eth_dev *dev)
n->node_index,
str_state[n->node_state]);
}
- IPN3KE_AFU_PMD_DEBUG("\n");
}
/* Traffic manager hierarchy commit */
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index a44497ce51..3ac65ca3b3 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
}
if (hw->mac.ops.fw_recovery_mode && hw->mac.ops.fw_recovery_mode(hw)) {
- PMD_INIT_LOG(ERR, "\nERROR: "
- "Firmware recovery mode detected. Limiting functionality.\n"
- "Refer to the Intel(R) Ethernet Adapters and Devices "
- "User Guide for details on firmware recovery mode.");
+ PMD_INIT_LOG(ERR, "ERROR: Firmware recovery mode detected. Limiting functionality.");
return -EIO;
}
@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
if (eth_da.nb_representor_ports > 0 &&
eth_da.type != RTE_ETH_REPRESENTOR_VF) {
- PMD_DRV_LOG(ERR, "unsupported representor type: %s\n",
+ PMD_DRV_LOG(ERR, "unsupported representor type: %s",
pci_dev->device.devargs->args);
return -ENOTSUP;
}
diff --git a/drivers/net/ixgbe/ixgbe_ipsec.c b/drivers/net/ixgbe/ixgbe_ipsec.c
index d331308556..3a666ba15f 100644
--- a/drivers/net/ixgbe/ixgbe_ipsec.c
+++ b/drivers/net/ixgbe/ixgbe_ipsec.c
@@ -120,7 +120,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no match and no free entries*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx IP table\n");
+ "No free entry left in the Rx IP table");
return -1;
}
@@ -134,7 +134,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx SA table\n");
+ "No free entry left in the Rx SA table");
return -1;
}
@@ -232,7 +232,7 @@ ixgbe_crypto_add_sa(struct ixgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Tx SA table\n");
+ "No free entry left in the Tx SA table");
return -1;
}
@@ -291,7 +291,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx IP table\n");
+ "Entry not found in the Rx IP table");
return -1;
}
@@ -306,7 +306,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx SA table\n");
+ "Entry not found in the Rx SA table");
return -1;
}
@@ -349,7 +349,7 @@ ixgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Tx SA table\n");
+ "Entry not found in the Tx SA table");
return -1;
}
reg_val = IPSRXIDX_WRITE | (sa_index << 3);
@@ -379,7 +379,7 @@ ixgbe_crypto_create_session(void *device,
if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
conf->crypto_xform->aead.algo !=
RTE_CRYPTO_AEAD_AES_GCM) {
- PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
return -ENOTSUP;
}
aead_xform = &conf->crypto_xform->aead;
@@ -388,14 +388,14 @@ ixgbe_crypto_create_session(void *device,
if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
return -ENOTSUP;
}
} else {
if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = IXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
return -ENOTSUP;
}
}
@@ -409,7 +409,7 @@ ixgbe_crypto_create_session(void *device,
if (ic_session->op == IXGBE_OP_AUTHENTICATED_ENCRYPTION) {
if (ixgbe_crypto_add_sa(ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ PMD_DRV_LOG(ERR, "Failed to add SA");
return -EPERM;
}
}
@@ -431,12 +431,12 @@ ixgbe_crypto_remove_session(void *device,
struct ixgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);
if (eth_dev != ic_session->dev) {
- PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ PMD_DRV_LOG(ERR, "Session not bound to this device");
return -ENODEV;
}
if (ixgbe_crypto_remove_sa(eth_dev, ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ PMD_DRV_LOG(ERR, "Failed to remove session");
return -EFAULT;
}
diff --git a/drivers/net/ixgbe/ixgbe_pf.c b/drivers/net/ixgbe/ixgbe_pf.c
index 0a0f639e39..002bc71c2a 100644
--- a/drivers/net/ixgbe/ixgbe_pf.c
+++ b/drivers/net/ixgbe/ixgbe_pf.c
@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
struct ixgbe_ethertype_filter ethertype_filter;
if (!hw->mac.ops.set_ethertype_anti_spoofing) {
- PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+ PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
return;
}
i = ixgbe_ethertype_filter_lookup(filter_info,
IXGBE_ETHERTYPE_FLOW_CTRL);
if (i >= 0) {
- PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+ PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
return;
}
@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
i = ixgbe_ethertype_filter_insert(filter_info,
&ethertype_filter);
if (i < 0) {
- PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+ PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
return;
}
@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
vmolr = IXGBE_READ_REG(hw, IXGBE_VMOLR(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~IXGBE_VMOLR_MPE;
@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
case RTE_ETH_MQ_TX_NONE:
case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
- ", but its tx mode = %d\n", vf,
+ ", but its tx mode = %d", vf,
eth_conf->txmode.mq_mode);
return -1;
@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
break;
default:
- PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+ PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
eth_conf->txmode.mq_mode);
return -1;
}
@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (!(fctrl & IXGBE_FCTRL_UPE)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(
(struct rte_ether_addr *)new_mac)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/ixgbe/rte_pmd_ixgbe.c b/drivers/net/ixgbe/rte_pmd_ixgbe.c
index f76ef63921..15c28e7a3f 100644
--- a/drivers/net/ixgbe/rte_pmd_ixgbe.c
+++ b/drivers/net/ixgbe/rte_pmd_ixgbe.c
@@ -955,7 +955,7 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
while (--retries) {
status = ixgbe_acquire_swfw_semaphore(hw, mask);
if (status) {
- PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d\n",
+ PMD_DRV_LOG(ERR, "Get SWFW sem failed, Status = %d",
status);
return status;
}
@@ -964,18 +964,18 @@ STATIC s32 rte_pmd_ixgbe_acquire_swfw(struct ixgbe_hw *hw, u32 mask)
return IXGBE_SUCCESS;
if (status == IXGBE_ERR_TOKEN_RETRY)
- PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d\n",
+ PMD_DRV_LOG(ERR, "Get PHY token failed, Status = %d",
status);
ixgbe_release_swfw_semaphore(hw, mask);
if (status != IXGBE_ERR_TOKEN_RETRY) {
PMD_DRV_LOG(ERR,
- "Retry get PHY token failed, Status=%d\n",
+ "Retry get PHY token failed, Status=%d",
status);
return status;
}
}
- PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X\n",
+ PMD_DRV_LOG(ERR, "swfw acquisition retries failed!: PHY ID = 0x%08X",
hw->phy.id);
return status;
}
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index 18377d9caf..f05f4c24df 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -1292,7 +1292,7 @@ memif_connect(struct rte_eth_dev *dev)
PROT_READ | PROT_WRITE,
MAP_SHARED, mr->fd, 0);
if (mr->addr == MAP_FAILED) {
- MIF_LOG(ERR, "mmap failed: %s\n",
+ MIF_LOG(ERR, "mmap failed: %s",
strerror(errno));
return -1;
}
diff --git a/drivers/net/mlx4/mlx4.c b/drivers/net/mlx4/mlx4.c
index a1a7e93288..7c0ac6888b 100644
--- a/drivers/net/mlx4/mlx4.c
+++ b/drivers/net/mlx4/mlx4.c
@@ -106,7 +106,7 @@ mlx4_init_shared_data(void)
sizeof(*mlx4_shared_data),
SOCKET_ID_ANY, 0);
if (mz == NULL) {
- ERROR("Cannot allocate mlx4 shared data\n");
+ ERROR("Cannot allocate mlx4 shared data");
ret = -rte_errno;
goto error;
}
@@ -117,7 +117,7 @@ mlx4_init_shared_data(void)
/* Lookup allocated shared memory. */
mz = rte_memzone_lookup(MZ_MLX4_PMD_SHARED_DATA);
if (mz == NULL) {
- ERROR("Cannot attach mlx4 shared data\n");
+ ERROR("Cannot attach mlx4 shared data");
ret = -rte_errno;
goto error;
}
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9bf1ec5509..297ff3fb31 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -257,7 +257,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
if (tx_free_thresh + 3 >= nb_desc) {
PMD_INIT_LOG(ERR,
"tx_free_thresh must be less than the number of TX entries minus 3(%u)."
- " (tx_free_thresh=%u port=%u queue=%u)\n",
+ " (tx_free_thresh=%u port=%u queue=%u)",
nb_desc - 3,
tx_free_thresh, dev->data->port_id, queue_idx);
return -EINVAL;
@@ -902,7 +902,7 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
if (!rxq->rxbuf_info) {
PMD_DRV_LOG(ERR,
- "Could not allocate rxbuf info for queue %d\n",
+ "Could not allocate rxbuf info for queue %d",
queue_id);
rte_free(rxq->event_buf);
rte_free(rxq);
diff --git a/drivers/net/ngbe/base/ngbe_hw.c b/drivers/net/ngbe/base/ngbe_hw.c
index 4dced0d328..68b0a8b8ab 100644
--- a/drivers/net/ngbe/base/ngbe_hw.c
+++ b/drivers/net/ngbe/base/ngbe_hw.c
@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
u32 i;
if (rte_pci_set_bus_master(pci_dev, enable) < 0) {
- DEBUGOUT("Cannot configure PCI bus master\n");
+ DEBUGOUT("Cannot configure PCI bus master");
return -1;
}
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index fb86e7b10d..4321924cb9 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
if (ssid == 0x1) {
PMD_INIT_LOG(ERR,
- "Read of internal subsystem device id failed\n");
+ "Read of internal subsystem device id failed");
return -ENODEV;
}
hw->sub_system_id = (u16)ssid >> 8 | (u16)ssid << 8;
diff --git a/drivers/net/ngbe/ngbe_pf.c b/drivers/net/ngbe/ngbe_pf.c
index 947ae7fe94..bb62e2fbb7 100644
--- a/drivers/net/ngbe/ngbe_pf.c
+++ b/drivers/net/ngbe/ngbe_pf.c
@@ -71,7 +71,7 @@ int ngbe_pf_host_init(struct rte_eth_dev *eth_dev)
sizeof(struct ngbe_vf_info) * vf_num, 0);
if (*vfinfo == NULL) {
PMD_INIT_LOG(ERR,
- "Cannot allocate memory for private VF data\n");
+ "Cannot allocate memory for private VF data");
return -ENOMEM;
}
@@ -320,7 +320,7 @@ ngbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)
vmolr = rd32(hw, NGBE_POOLETHCTL(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~NGBE_POOLETHCTL_MCP;
@@ -482,7 +482,7 @@ ngbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -564,7 +564,7 @@ ngbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
if (!(fctrl & NGBE_PSRCTL_UCP)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -601,7 +601,7 @@ ngbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(ea)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/octeon_ep/cnxk_ep_tx.c b/drivers/net/octeon_ep/cnxk_ep_tx.c
index 9f11a2f317..8628edf8a7 100644
--- a/drivers/net/octeon_ep/cnxk_ep_tx.c
+++ b/drivers/net/octeon_ep/cnxk_ep_tx.c
@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;
if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
- otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+ otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
goto exit;
}
diff --git a/drivers/net/octeon_ep/cnxk_ep_vf.c b/drivers/net/octeon_ep/cnxk_ep_vf.c
index ef275703c3..74b63a161f 100644
--- a/drivers/net/octeon_ep/cnxk_ep_vf.c
+++ b/drivers/net/octeon_ep/cnxk_ep_vf.c
@@ -102,7 +102,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
}
if (loop < 0) {
- otx_ep_err("IDLE bit is not set\n");
+ otx_ep_err("IDLE bit is not set");
return -EIO;
}
@@ -134,7 +134,7 @@ cnxk_ep_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
} while (reg_val != 0 && loop--);
if (loop < 0) {
- otx_ep_err("INST CNT REGISTER is not zero\n");
+ otx_ep_err("INST CNT REGISTER is not zero");
return -EIO;
}
@@ -181,7 +181,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("OUT CNT REGISTER value is zero\n");
+ otx_ep_err("OUT CNT REGISTER value is zero");
return -EIO;
}
@@ -217,7 +217,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets credit register value is not cleared\n");
+ otx_ep_err("Packets credit register value is not cleared");
return -EIO;
}
@@ -250,7 +250,7 @@ cnxk_ep_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets sent register value is not cleared\n");
+ otx_ep_err("Packets sent register value is not cleared");
return -EIO;
}
@@ -280,7 +280,7 @@ cnxk_ep_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("INSTR DBELL not coming back to 0\n");
+ otx_ep_err("INSTR DBELL not coming back to 0");
return -EIO;
}
diff --git a/drivers/net/octeon_ep/otx2_ep_vf.c b/drivers/net/octeon_ep/otx2_ep_vf.c
index 7f4edf8dcf..fdab542246 100644
--- a/drivers/net/octeon_ep/otx2_ep_vf.c
+++ b/drivers/net/octeon_ep/otx2_ep_vf.c
@@ -37,7 +37,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
SDP_VF_R_IN_INSTR_DBELL(q_no));
}
if (loop < 0) {
- otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
return -EIO;
}
@@ -48,7 +48,7 @@ otx2_vf_reset_iq(struct otx_ep_device *otx_ep, int q_no)
rte_delay_ms(1);
} while ((d64 & ~SDP_VF_R_IN_CNTS_OUT_INT) != 0 && loop--);
if (loop < 0) {
- otx_ep_err("%s: in_cnts init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: in_cnts init retry limit exceeded.", __func__);
return -EIO;
}
@@ -81,7 +81,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
SDP_VF_R_OUT_SLIST_DBELL(q_no));
}
if (loop < 0) {
- otx_ep_err("%s: doorbell init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: doorbell init retry limit exceeded.", __func__);
return -EIO;
}
@@ -109,7 +109,7 @@ otx2_vf_reset_oq(struct otx_ep_device *otx_ep, int q_no)
rte_delay_ms(1);
} while ((d64 & ~SDP_VF_R_OUT_CNTS_IN_INT) != 0 && loop--);
if (loop < 0) {
- otx_ep_err("%s: out_cnts init retry limit exceeded.\n", __func__);
+ otx_ep_err("%s: out_cnts init retry limit exceeded.", __func__);
return -EIO;
}
@@ -252,7 +252,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
}
if (loop < 0) {
- otx_ep_err("IDLE bit is not set\n");
+ otx_ep_err("IDLE bit is not set");
return -EIO;
}
@@ -283,7 +283,7 @@ otx2_vf_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
} while (reg_val != 0 && loop--);
if (loop < 0) {
- otx_ep_err("INST CNT REGISTER is not zero\n");
+ otx_ep_err("INST CNT REGISTER is not zero");
return -EIO;
}
@@ -332,7 +332,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("OUT CNT REGISTER value is zero\n");
+ otx_ep_err("OUT CNT REGISTER value is zero");
return -EIO;
}
@@ -368,7 +368,7 @@ otx2_vf_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0) {
- otx_ep_err("Packets credit register value is not cleared\n");
+ otx_ep_err("Packets credit register value is not cleared");
return -EIO;
}
otx_ep_dbg("SDP_R[%d]_credit:%x", oq_no, rte_read32(droq->pkts_credit_reg));
@@ -425,7 +425,7 @@ otx2_vf_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("INSTR DBELL not coming back to 0\n");
+ otx_ep_err("INSTR DBELL not coming back to 0");
return -EIO;
}
diff --git a/drivers/net/octeon_ep/otx_ep_common.h b/drivers/net/octeon_ep/otx_ep_common.h
index 82e57520d3..938c51b35d 100644
--- a/drivers/net/octeon_ep/otx_ep_common.h
+++ b/drivers/net/octeon_ep/otx_ep_common.h
@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
{\
typeof(value) val = (value); \
typeof(reg_off) off = (reg_off); \
- otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx\n", \
+ otx_ep_dbg("octeon_write_csr64: reg: 0x%08lx val: 0x%016llx", \
(unsigned long)off, (unsigned long long)val); \
rte_write64(val, ((base_addr) + off)); \
}
diff --git a/drivers/net/octeon_ep/otx_ep_ethdev.c b/drivers/net/octeon_ep/otx_ep_ethdev.c
index 615cbbb648..c0298a56ac 100644
--- a/drivers/net/octeon_ep/otx_ep_ethdev.c
+++ b/drivers/net/octeon_ep/otx_ep_ethdev.c
@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
ret = otx_ep_mbox_get_link_info(eth_dev, &link);
if (ret)
return -EINVAL;
- otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d\n",
+ otx_ep_dbg("link status resp link %d duplex %d autoneg %d link_speed %d",
link.link_status, link.link_duplex, link.link_autoneg, link.link_speed);
return rte_eth_linkstatus_set(eth_dev, &link);
}
@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_set_mac_addr(eth_dev, mac_addr);
if (ret)
return -EINVAL;
- otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Default MAC address " RTE_ETHER_ADDR_PRT_FMT "",
RTE_ETHER_ADDR_BYTES(mac_addr));
rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
return 0;
@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
/* Enable IQ/OQ for this device */
ret = otx_epvf->fn_list.enable_io_queues(otx_epvf);
if (ret) {
- otx_ep_err("IOQ enable failed\n");
+ otx_ep_err("IOQ enable failed");
return ret;
}
@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
otx_epvf->droq[q]->pkts_credit_reg);
rte_wmb();
- otx_ep_info("OQ[%d] dbells [%d]\n", q,
+ otx_ep_info("OQ[%d] dbells [%d]", q,
rte_read32(otx_epvf->droq[q]->pkts_credit_reg));
}
@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
otx_ep_set_tx_func(eth_dev);
otx_ep_set_rx_func(eth_dev);
- otx_ep_info("dev started\n");
+ otx_ep_info("dev started");
for (q = 0; q < eth_dev->data->nb_rx_queues; q++)
eth_dev->data->rx_queue_state[q] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
/* Same DMA buffer is shared by OQ and IQ, clear it at start */
memset(otx_epvf->ism_buffer_mz->addr, 0, OTX_EP_ISM_BUFFER_SIZE);
if (otx_epvf->ism_buffer_mz == NULL) {
- otx_ep_err("Failed to allocate ISM buffer\n");
+ otx_ep_err("Failed to allocate ISM buffer");
return(-1);
}
otx_ep_dbg("ISM: virt: 0x%p, dma: 0x%" PRIX64,
@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
ret = -EINVAL;
break;
default:
- otx_ep_err("Unsupported device\n");
+ otx_ep_err("Unsupported device");
ret = -EINVAL;
}
if (!ret)
- otx_ep_info("OTX_EP dev_id[%d]\n", dev_id);
+ otx_ep_info("OTX_EP dev_id[%d]", dev_id);
return ret;
}
@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
ret = otx_ep_chip_specific_setup(otx_epvf);
if (ret) {
- otx_ep_err("Chip specific setup failed\n");
+ otx_ep_err("Chip specific setup failed");
goto setup_fail;
}
@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->eth_dev->rx_pkt_burst = &cnxk_ep_recv_pkts;
otx_epvf->chip_gen = OTX_EP_CN10XX;
} else {
- otx_ep_err("Invalid chip_id\n");
+ otx_ep_err("Invalid chip_id");
ret = -EINVAL;
goto setup_fail;
}
@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
otx_epvf->max_rx_queues = ethdev_queues;
otx_epvf->max_tx_queues = ethdev_queues;
- otx_ep_info("OTX_EP Device is Ready\n");
+ otx_ep_info("OTX_EP Device is Ready");
setup_fail:
return ret;
@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
txmode = &conf->txmode;
if (eth_dev->data->nb_rx_queues > otx_epvf->max_rx_queues ||
eth_dev->data->nb_tx_queues > otx_epvf->max_tx_queues) {
- otx_ep_err("invalid num queues\n");
+ otx_ep_err("invalid num queues");
return -EINVAL;
}
- otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d\n",
+ otx_ep_info("OTX_EP Device is configured with num_txq %d num_rxq %d",
eth_dev->data->nb_rx_queues, eth_dev->data->nb_tx_queues);
otx_epvf->rx_offloads = rxmode->offloads;
@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
uint16_t buf_size;
if (q_no >= otx_epvf->max_rx_queues) {
- otx_ep_err("Invalid rx queue number %u\n", q_no);
+ otx_ep_err("Invalid rx queue number %u", q_no);
return -EINVAL;
}
if (num_rx_descs & (num_rx_descs - 1)) {
- otx_ep_err("Invalid rx desc number should be pow 2 %u\n",
+ otx_ep_err("Invalid rx desc number should be pow 2 %u",
num_rx_descs);
return -EINVAL;
}
if (num_rx_descs < (SDP_GBL_WMARK * 8)) {
- otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark %u\n",
+ otx_ep_err("Invalid rx desc number(%u) should at least be greater than 8xwmark %u",
num_rx_descs, (SDP_GBL_WMARK * 8));
return -EINVAL;
}
- otx_ep_dbg("setting up rx queue %u\n", q_no);
+ otx_ep_dbg("setting up rx queue %u", q_no);
mbp_priv = rte_mempool_get_priv(mp);
buf_size = mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM;
if (otx_ep_setup_oqs(otx_epvf, q_no, num_rx_descs, buf_size, mp,
socket_id)) {
- otx_ep_err("droq allocation failed\n");
+ otx_ep_err("droq allocation failed");
return -1;
}
@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
int q_id = rq->q_no;
if (otx_ep_delete_oqs(otx_epvf, q_id))
- otx_ep_err("Failed to delete OQ:%d\n", q_id);
+ otx_ep_err("Failed to delete OQ:%d", q_id);
}
/**
@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
int retval;
if (q_no >= otx_epvf->max_tx_queues) {
- otx_ep_err("Invalid tx queue number %u\n", q_no);
+ otx_ep_err("Invalid tx queue number %u", q_no);
return -EINVAL;
}
if (num_tx_descs & (num_tx_descs - 1)) {
- otx_ep_err("Invalid tx desc number should be pow 2 %u\n",
+ otx_ep_err("Invalid tx desc number should be pow 2 %u",
num_tx_descs);
return -EINVAL;
}
if (num_tx_descs < (SDP_GBL_WMARK * 8)) {
- otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)\n",
+ otx_ep_err("Invalid tx desc number(%u) should at least be greater than 8*wmark(%u)",
num_tx_descs, (SDP_GBL_WMARK * 8));
return -EINVAL;
}
@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
retval = otx_ep_setup_iqs(otx_epvf, q_no, num_tx_descs, socket_id);
if (retval) {
- otx_ep_err("IQ(TxQ) creation failed.\n");
+ otx_ep_err("IQ(TxQ) creation failed.");
return retval;
}
eth_dev->data->tx_queues[q_no] = otx_epvf->instr_queue[q_no];
- otx_ep_dbg("tx queue[%d] setup\n", q_no);
+ otx_ep_dbg("tx queue[%d] setup", q_no);
return 0;
}
@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
num_queues = otx_epvf->nb_rx_queues;
for (q_no = 0; q_no < num_queues; q_no++) {
if (otx_ep_delete_oqs(otx_epvf, q_no)) {
- otx_ep_err("Failed to delete OQ:%d\n", q_no);
+ otx_ep_err("Failed to delete OQ:%d", q_no);
return -EINVAL;
}
}
- otx_ep_dbg("Num OQs:%d freed\n", otx_epvf->nb_rx_queues);
+ otx_ep_dbg("Num OQs:%d freed", otx_epvf->nb_rx_queues);
num_queues = otx_epvf->nb_tx_queues;
for (q_no = 0; q_no < num_queues; q_no++) {
if (otx_ep_delete_iqs(otx_epvf, q_no)) {
- otx_ep_err("Failed to delete IQ:%d\n", q_no);
+ otx_ep_err("Failed to delete IQ:%d", q_no);
return -EINVAL;
}
}
- otx_ep_dbg("Num IQs:%d freed\n", otx_epvf->nb_tx_queues);
+ otx_ep_dbg("Num IQs:%d freed", otx_epvf->nb_tx_queues);
if (rte_eth_dma_zone_free(eth_dev, "ism", 0)) {
- otx_ep_err("Failed to delete ISM buffer\n");
+ otx_ep_err("Failed to delete ISM buffer");
return -EINVAL;
}
@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_get_mac_addr(eth_dev, mac_addr);
if (ret)
return -EINVAL;
- otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Get MAC address " RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
return 0;
}
@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
ret_val = otx_ep_dev_get_mac_addr(eth_dev, mac_addr);
if (!ret_val) {
if (!rte_is_valid_assigned_ether_addr(mac_addr)) {
- otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("PF doesn't have valid VF MAC addr" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
rte_eth_random_addr(mac_addr->addr_bytes);
- otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Setting Random MAC address" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
ret_val = otx_ep_dev_set_default_mac_addr(eth_dev, mac_addr);
if (ret_val) {
- otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails\n",
+ otx_ep_err("Setting MAC address " RTE_ETHER_ADDR_PRT_FMT "fails",
RTE_ETHER_ADDR_BYTES(mac_addr));
return ret_val;
}
}
- otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("Received valid MAC addr from PF" RTE_ETHER_ADDR_PRT_FMT,
RTE_ETHER_ADDR_BYTES(mac_addr));
} else {
- otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d\n",
+ otx_ep_err("Getting MAC address from PF via Mbox fails with ret_val: %d",
ret_val);
return ret_val;
}
@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->mbox_neg_ver = OTX_EP_MBOX_VERSION_V1;
eth_dev->data->mac_addrs = rte_zmalloc("otx_ep", RTE_ETHER_ADDR_LEN, 0);
if (eth_dev->data->mac_addrs == NULL) {
- otx_ep_err("MAC addresses memory allocation failed\n");
+ otx_ep_err("MAC addresses memory allocation failed");
eth_dev->dev_ops = NULL;
return -ENOMEM;
}
@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
otx_epvf->chip_id == PCI_DEVID_CNF10KA_EP_NET_VF ||
otx_epvf->chip_id == PCI_DEVID_CNF10KB_EP_NET_VF) {
otx_epvf->pkind = SDP_OTX2_PKIND_FS0;
- otx_ep_info("using pkind %d\n", otx_epvf->pkind);
+ otx_ep_info("using pkind %d", otx_epvf->pkind);
} else if (otx_epvf->chip_id == PCI_DEVID_OCTEONTX_EP_VF) {
otx_epvf->pkind = SDP_PKIND;
- otx_ep_info("Using pkind %d.\n", otx_epvf->pkind);
+ otx_ep_info("Using pkind %d.", otx_epvf->pkind);
} else {
- otx_ep_err("Invalid chip id\n");
+ otx_ep_err("Invalid chip id");
return -EINVAL;
}
@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
if (otx_ep_eth_dev_query_set_vf_mac(eth_dev,
(struct rte_ether_addr *)&vf_mac_addr)) {
- otx_ep_err("set mac addr failed\n");
+ otx_ep_err("set mac addr failed");
return -ENODEV;
}
rte_ether_addr_copy(&vf_mac_addr, eth_dev->data->mac_addrs);
diff --git a/drivers/net/octeon_ep/otx_ep_mbox.c b/drivers/net/octeon_ep/otx_ep_mbox.c
index 4118645dc7..c92adeaf9a 100644
--- a/drivers/net/octeon_ep/otx_ep_mbox.c
+++ b/drivers/net/octeon_ep/otx_ep_mbox.c
@@ -44,11 +44,11 @@ __otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,
}
}
if (count == OTX_EP_MBOX_TIMEOUT_MS) {
- otx_ep_err("mbox send Timeout count:%d\n", count);
+ otx_ep_err("mbox send Timeout count:%d", count);
return OTX_EP_MBOX_TIMEOUT_MS;
}
if (rsp->s.type != OTX_EP_MBOX_TYPE_RSP_ACK) {
- otx_ep_err("mbox received NACK from PF\n");
+ otx_ep_err("mbox received NACK from PF");
return OTX_EP_MBOX_CMD_STATUS_NACK;
}
@@ -65,7 +65,7 @@ otx_ep_send_mbox_cmd(struct otx_ep_device *otx_ep,
rte_spinlock_lock(&otx_ep->mbox_lock);
if (otx_ep_cmd_versions[cmd.s.opcode] > otx_ep->mbox_neg_ver) {
- otx_ep_dbg("CMD:%d not supported in Version:%d\n", cmd.s.opcode,
+ otx_ep_dbg("CMD:%d not supported in Version:%d", cmd.s.opcode,
otx_ep->mbox_neg_ver);
rte_spinlock_unlock(&otx_ep->mbox_lock);
return -EOPNOTSUPP;
@@ -92,7 +92,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
/* Send cmd to read data from PF */
ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("mbox bulk read data request failed\n");
+ otx_ep_err("mbox bulk read data request failed");
rte_spinlock_unlock(&otx_ep->mbox_lock);
return ret;
}
@@ -108,7 +108,7 @@ otx_ep_mbox_bulk_read(struct otx_ep_device *otx_ep,
while (data_len) {
ret = __otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("mbox bulk read data request failed\n");
+ otx_ep_err("mbox bulk read data request failed");
otx_ep->mbox_data_index = 0;
memset(otx_ep->mbox_data_buf, 0, OTX_EP_MBOX_MAX_DATA_BUF_SIZE);
rte_spinlock_unlock(&otx_ep->mbox_lock);
@@ -154,10 +154,10 @@ otx_ep_mbox_set_mtu(struct rte_eth_dev *eth_dev, uint16_t mtu)
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("set MTU failed\n");
+ otx_ep_err("set MTU failed");
return -EINVAL;
}
- otx_ep_dbg("mtu set success mtu %u\n", mtu);
+ otx_ep_dbg("mtu set success mtu %u", mtu);
return 0;
}
@@ -178,10 +178,10 @@ otx_ep_mbox_set_mac_addr(struct rte_eth_dev *eth_dev,
cmd.s_set_mac.mac_addr[i] = mac_addr->addr_bytes[i];
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("set MAC address failed\n");
+ otx_ep_err("set MAC address failed");
return -EINVAL;
}
- otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
__func__, RTE_ETHER_ADDR_BYTES(mac_addr));
rte_ether_addr_copy(mac_addr, eth_dev->data->mac_addrs);
return 0;
@@ -201,12 +201,12 @@ otx_ep_mbox_get_mac_addr(struct rte_eth_dev *eth_dev,
cmd.s_set_mac.opcode = OTX_EP_MBOX_CMD_GET_MAC_ADDR;
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("get MAC address failed\n");
+ otx_ep_err("get MAC address failed");
return -EINVAL;
}
for (i = 0; i < RTE_ETHER_ADDR_LEN; i++)
mac_addr->addr_bytes[i] = rsp.s_set_mac.mac_addr[i];
- otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT "\n",
+ otx_ep_dbg("%s VF MAC " RTE_ETHER_ADDR_PRT_FMT,
__func__, RTE_ETHER_ADDR_BYTES(mac_addr));
return 0;
}
@@ -224,7 +224,7 @@ int otx_ep_mbox_get_link_status(struct rte_eth_dev *eth_dev,
cmd.s_link_status.opcode = OTX_EP_MBOX_CMD_GET_LINK_STATUS;
ret = otx_ep_send_mbox_cmd(otx_ep, cmd, &rsp);
if (ret) {
- otx_ep_err("Get link status failed\n");
+ otx_ep_err("Get link status failed");
return -EINVAL;
}
*oper_up = rsp.s_link_status.status;
@@ -242,7 +242,7 @@ int otx_ep_mbox_get_link_info(struct rte_eth_dev *eth_dev,
ret = otx_ep_mbox_bulk_read(otx_ep, OTX_EP_MBOX_CMD_GET_LINK_INFO,
(uint8_t *)&link_info, (int32_t *)&size);
if (ret) {
- otx_ep_err("Get link info failed\n");
+ otx_ep_err("Get link info failed");
return ret;
}
link->link_status = RTE_ETH_LINK_UP;
@@ -310,12 +310,12 @@ int otx_ep_mbox_version_check(struct rte_eth_dev *eth_dev)
* during initialization of PMD driver.
*/
if (ret == OTX_EP_MBOX_CMD_STATUS_NACK || rsp.s_version.version == 0) {
- otx_ep_dbg("VF Mbox version fallback to base version from:%u\n",
+ otx_ep_dbg("VF Mbox version fallback to base version from:%u",
(uint32_t)cmd.s_version.version);
return 0;
}
otx_ep->mbox_neg_ver = (uint32_t)rsp.s_version.version;
- otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u\n",
+ otx_ep_dbg("VF Mbox version:%u Negotiated VF version with PF:%u",
(uint32_t)cmd.s_version.version,
(uint32_t)rsp.s_version.version);
return 0;
diff --git a/drivers/net/octeon_ep/otx_ep_rxtx.c b/drivers/net/octeon_ep/otx_ep_rxtx.c
index c421ef0a1c..65a1f304e8 100644
--- a/drivers/net/octeon_ep/otx_ep_rxtx.c
+++ b/drivers/net/octeon_ep/otx_ep_rxtx.c
@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
int ret = 0;
if (mz == NULL) {
- otx_ep_err("Memzone: NULL\n");
+ otx_ep_err("Memzone: NULL");
return;
}
mz_tmp = rte_memzone_lookup(mz->name);
if (mz_tmp == NULL) {
- otx_ep_err("Memzone %s Not Found\n", mz->name);
+ otx_ep_err("Memzone %s Not Found", mz->name);
return;
}
ret = rte_memzone_free(mz);
if (ret)
- otx_ep_err("Memzone free failed : ret = %d\n", ret);
+ otx_ep_err("Memzone free failed : ret = %d", ret);
}
/* Free IQ resources */
@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
iq = otx_ep->instr_queue[iq_no];
if (iq == NULL) {
- otx_ep_err("Invalid IQ[%d]\n", iq_no);
+ otx_ep_err("Invalid IQ[%d]", iq_no);
return -EINVAL;
}
@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
otx_ep->nb_tx_queues--;
- otx_ep_info("IQ[%d] is deleted\n", iq_no);
+ otx_ep_info("IQ[%d] is deleted", iq_no);
return 0;
}
@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
OTX_EP_PCI_RING_ALIGN,
socket_id);
if (iq->iq_mz == NULL) {
- otx_ep_err("IQ[%d] memzone alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] memzone alloc failed", iq_no);
goto iq_init_fail;
}
@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->base_addr = (uint8_t *)iq->iq_mz->addr;
if (num_descs & (num_descs - 1)) {
- otx_ep_err("IQ[%d] descs not in power of 2\n", iq_no);
+ otx_ep_err("IQ[%d] descs not in power of 2", iq_no);
goto iq_init_fail;
}
@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
RTE_CACHE_LINE_SIZE,
rte_socket_id());
if (iq->req_list == NULL) {
- otx_ep_err("IQ[%d] req_list alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] req_list alloc failed", iq_no);
goto iq_init_fail;
}
@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
sg = rte_zmalloc_socket("sg_entry", (OTX_EP_MAX_SG_LISTS * OTX_EP_SG_ENTRY_SIZE),
OTX_EP_SG_ALIGN, rte_socket_id());
if (sg == NULL) {
- otx_ep_err("IQ[%d] sg_entries alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] sg_entries alloc failed", iq_no);
goto iq_init_fail;
}
@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
iq->req_list[i].finfo.g.sg = sg;
}
- otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d\n",
+ otx_ep_info("IQ[%d]: base: %p basedma: %lx count: %d",
iq_no, iq->base_addr, (unsigned long)iq->base_addr_dma,
iq->nb_desc);
iq->mbuf_list = rte_zmalloc_socket("mbuf_list", (iq->nb_desc * sizeof(struct rte_mbuf *)),
RTE_CACHE_LINE_SIZE, rte_socket_id());
if (!iq->mbuf_list) {
- otx_ep_err("IQ[%d] mbuf_list alloc failed\n", iq_no);
+ otx_ep_err("IQ[%d] mbuf_list alloc failed", iq_no);
goto iq_init_fail;
}
@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
otx_ep->instr_queue[iq_no] = iq;
if (otx_ep_init_instr_queue(otx_ep, iq_no, num_descs, socket_id)) {
- otx_ep_err("IQ init is failed\n");
+ otx_ep_err("IQ init is failed");
goto delete_IQ;
}
otx_ep->nb_tx_queues++;
- otx_ep_info("IQ[%d] is created.\n", iq_no);
+ otx_ep_info("IQ[%d] is created.", iq_no);
return 0;
@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
droq = otx_ep->droq[oq_no];
if (droq == NULL) {
- otx_ep_err("Invalid droq[%d]\n", oq_no);
+ otx_ep_err("Invalid droq[%d]", oq_no);
return -EINVAL;
}
@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
otx_ep->nb_rx_queues--;
- otx_ep_info("OQ[%d] is deleted\n", oq_no);
+ otx_ep_info("OQ[%d] is deleted", oq_no);
return 0;
}
@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
for (idx = 0; idx < droq->nb_desc; idx++) {
buf = rte_pktmbuf_alloc(droq->mpool);
if (buf == NULL) {
- otx_ep_err("OQ buffer alloc failed\n");
+ otx_ep_err("OQ buffer alloc failed");
droq->stats.rx_alloc_failure++;
return -ENOMEM;
}
@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
uint32_t desc_ring_size;
int ret;
- otx_ep_info("OQ[%d] Init start\n", q_no);
+ otx_ep_info("OQ[%d] Init start", q_no);
droq = otx_ep->droq[q_no];
droq->otx_ep_dev = otx_ep;
@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
socket_id);
if (droq->desc_ring_mz == NULL) {
- otx_ep_err("OQ:%d desc_ring allocation failed\n", q_no);
+ otx_ep_err("OQ:%d desc_ring allocation failed", q_no);
goto init_droq_fail;
}
droq->desc_ring_dma = droq->desc_ring_mz->iova;
droq->desc_ring = (struct otx_ep_droq_desc *)droq->desc_ring_mz->addr;
- otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx\n",
+ otx_ep_dbg("OQ[%d]: desc_ring: virt: 0x%p, dma: %lx",
q_no, droq->desc_ring, (unsigned long)droq->desc_ring_dma);
- otx_ep_dbg("OQ[%d]: num_desc: %d\n", q_no, droq->nb_desc);
+ otx_ep_dbg("OQ[%d]: num_desc: %d", q_no, droq->nb_desc);
/* OQ buf_list set up */
droq->recv_buf_list = rte_zmalloc_socket("recv_buf_list",
(droq->nb_desc * sizeof(struct rte_mbuf *)),
RTE_CACHE_LINE_SIZE, socket_id);
if (droq->recv_buf_list == NULL) {
- otx_ep_err("OQ recv_buf_list alloc failed\n");
+ otx_ep_err("OQ recv_buf_list alloc failed");
goto init_droq_fail;
}
@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
droq = (struct otx_ep_droq *)rte_zmalloc("otx_ep_OQ",
sizeof(*droq), RTE_CACHE_LINE_SIZE);
if (droq == NULL) {
- otx_ep_err("Droq[%d] Creation Failed\n", oq_no);
+ otx_ep_err("Droq[%d] Creation Failed", oq_no);
return -ENOMEM;
}
otx_ep->droq[oq_no] = droq;
if (otx_ep_init_droq(otx_ep, oq_no, num_descs, desc_size, mpool,
socket_id)) {
- otx_ep_err("Droq[%d] Initialization failed\n", oq_no);
+ otx_ep_err("Droq[%d] Initialization failed", oq_no);
goto delete_OQ;
}
- otx_ep_info("OQ[%d] is created.\n", oq_no);
+ otx_ep_info("OQ[%d] is created.", oq_no);
otx_ep->nb_rx_queues++;
@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
case OTX_EP_REQTYPE_NORESP_GATHER:
/* This will take care of multiple segments also */
rte_pktmbuf_free(mbuf);
- otx_ep_dbg("IQ buffer freed at idx[%d]\n", idx);
+ otx_ep_dbg("IQ buffer freed at idx[%d]", idx);
break;
case OTX_EP_REQTYPE_NONE:
default:
- otx_ep_info("This iqreq mode is not supported:%d\n", reqtype);
+ otx_ep_info("This iqreq mode is not supported:%d", reqtype);
}
/* Reset the request list at this index */
@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
num_sg = (frags + mask) / OTX_EP_NUM_SG_PTRS;
if (unlikely(pkt_len > OTX_EP_MAX_PKT_SZ && num_sg > OTX_EP_MAX_SG_LISTS)) {
- otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments\n");
+ otx_ep_err("Failed to xmit the pkt, pkt_len is higher or pkt has more segments");
goto exit;
}
@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
iqcmd.irh.u64 = rte_bswap64(iqcmd.irh.u64);
#ifdef OTX_EP_IO_DEBUG
- otx_ep_dbg("After swapping\n");
- otx_ep_dbg("Word0 [dptr]: 0x%016lx\n",
+ otx_ep_dbg("After swapping");
+ otx_ep_dbg("Word0 [dptr]: 0x%016lx",
(unsigned long)iqcmd.dptr);
- otx_ep_dbg("Word1 [ihtx]: 0x%016lx\n", (unsigned long)iqcmd.ih);
- otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx\n",
+ otx_ep_dbg("Word1 [ihtx]: 0x%016lx", (unsigned long)iqcmd.ih);
+ otx_ep_dbg("Word2 [pki_ih3]: 0x%016lx",
(unsigned long)iqcmd.pki_ih3);
- otx_ep_dbg("Word3 [rptr]: 0x%016lx\n",
+ otx_ep_dbg("Word3 [rptr]: 0x%016lx",
(unsigned long)iqcmd.rptr);
- otx_ep_dbg("Word4 [irh]: 0x%016lx\n", (unsigned long)iqcmd.irh);
- otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx\n",
+ otx_ep_dbg("Word4 [irh]: 0x%016lx", (unsigned long)iqcmd.irh);
+ otx_ep_dbg("Word5 [exhdr[0]]: 0x%016lx",
(unsigned long)iqcmd.exhdr[0]);
rte_pktmbuf_dump(stdout, m, rte_pktmbuf_pkt_len(m));
#endif
@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
if (unlikely(!info->length)) {
int retry = OTX_EP_MAX_DELAYED_PKT_RETRIES;
/* otx_ep_dbg("OCTEON DROQ[%d]: read_idx: %d; Data not ready "
- * "yet, Retry; pending=%lu\n", droq->q_no, droq->read_idx,
+ * "yet, Retry; pending=%lu", droq->q_no, droq->read_idx,
* droq->pkts_pending);
*/
droq->stats.pkts_delayed_data++;
@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
rte_delay_us_block(50);
}
if (!retry && !info->length) {
- otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!\n",
+ otx_ep_err("OCTEON DROQ[%d]: read_idx: %d; Retry failed !!",
droq->q_no, droq->read_idx);
/* May be zero length packet; drop it */
assert(0);
@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
last_buf = mbuf;
} else {
- otx_ep_err("no buf\n");
+ otx_ep_err("no buf");
assert(0);
}
diff --git a/drivers/net/octeon_ep/otx_ep_vf.c b/drivers/net/octeon_ep/otx_ep_vf.c
index 236b7a874c..7defb0f13d 100644
--- a/drivers/net/octeon_ep/otx_ep_vf.c
+++ b/drivers/net/octeon_ep/otx_ep_vf.c
@@ -142,7 +142,7 @@ otx_ep_setup_iq_regs(struct otx_ep_device *otx_ep, uint32_t iq_no)
iq->inst_cnt_reg = (uint8_t *)otx_ep->hw_addr +
OTX_EP_R_IN_CNTS(iq_no);
- otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p\n",
+ otx_ep_dbg("InstQ[%d]:dbell reg @ 0x%p inst_cnt_reg @ 0x%p",
iq_no, iq->doorbell_reg, iq->inst_cnt_reg);
loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -220,14 +220,14 @@ otx_ep_setup_oq_regs(struct otx_ep_device *otx_ep, uint32_t oq_no)
}
if (loop < 0)
return -EIO;
- otx_ep_dbg("OTX_EP_R[%d]_credit:%x\n", oq_no,
+ otx_ep_dbg("OTX_EP_R[%d]_credit:%x", oq_no,
rte_read32(droq->pkts_credit_reg));
/* Clear the OQ_OUT_CNTS doorbell */
reg_val = rte_read32(droq->pkts_sent_reg);
rte_write32((uint32_t)reg_val, droq->pkts_sent_reg);
- otx_ep_dbg("OTX_EP_R[%d]_sent: %x\n", oq_no,
+ otx_ep_dbg("OTX_EP_R[%d]_sent: %x", oq_no,
rte_read32(droq->pkts_sent_reg));
loop = OTX_EP_BUSY_LOOP_COUNT;
@@ -259,7 +259,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
}
if (loop < 0) {
- otx_ep_err("dbell reset failed\n");
+ otx_ep_err("dbell reset failed");
return -EIO;
}
@@ -269,7 +269,7 @@ otx_ep_enable_iq(struct otx_ep_device *otx_ep, uint32_t q_no)
otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_IN_ENABLE(q_no));
- otx_ep_info("IQ[%d] enable done\n", q_no);
+ otx_ep_info("IQ[%d] enable done", q_no);
return 0;
}
@@ -290,7 +290,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
rte_delay_ms(1);
}
if (loop < 0) {
- otx_ep_err("dbell reset failed\n");
+ otx_ep_err("dbell reset failed");
return -EIO;
}
@@ -299,7 +299,7 @@ otx_ep_enable_oq(struct otx_ep_device *otx_ep, uint32_t q_no)
reg_val |= 0x1ull;
otx_ep_write64(reg_val, otx_ep->hw_addr, OTX_EP_R_OUT_ENABLE(q_no));
- otx_ep_info("OQ[%d] enable done\n", q_no);
+ otx_ep_info("OQ[%d] enable done", q_no);
return 0;
}
@@ -402,10 +402,10 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
if (otx_ep->conf == NULL) {
otx_ep->conf = otx_ep_get_defconf(otx_ep);
if (otx_ep->conf == NULL) {
- otx_ep_err("OTX_EP VF default config not found\n");
+ otx_ep_err("OTX_EP VF default config not found");
return -ENOENT;
}
- otx_ep_info("Default config is used\n");
+ otx_ep_info("Default config is used");
}
/* Get IOQs (RPVF] count */
@@ -414,7 +414,7 @@ otx_ep_vf_setup_device(struct otx_ep_device *otx_ep)
otx_ep->sriov_info.rings_per_vf = ((reg_val >> OTX_EP_R_IN_CTL_RPVF_POS)
& OTX_EP_R_IN_CTL_RPVF_MASK);
- otx_ep_info("OTX_EP RPVF: %d\n", otx_ep->sriov_info.rings_per_vf);
+ otx_ep_info("OTX_EP RPVF: %d", otx_ep->sriov_info.rings_per_vf);
otx_ep->fn_list.setup_iq_regs = otx_ep_setup_iq_regs;
otx_ep->fn_list.setup_oq_regs = otx_ep_setup_oq_regs;
diff --git a/drivers/net/octeontx/base/octeontx_pkovf.c b/drivers/net/octeontx/base/octeontx_pkovf.c
index 5d445dfb49..7aec84a813 100644
--- a/drivers/net/octeontx/base/octeontx_pkovf.c
+++ b/drivers/net/octeontx/base/octeontx_pkovf.c
@@ -364,7 +364,7 @@ octeontx_pko_chan_stop(struct octeontx_pko_vf_ctl_s *ctl, uint64_t chanid)
res = octeontx_pko_dq_close(dq);
if (res < 0)
- octeontx_log_err("closing DQ%d failed\n", dq);
+ octeontx_log_err("closing DQ%d failed", dq);
dq_cnt++;
dq++;
diff --git a/drivers/net/octeontx/octeontx_ethdev.c b/drivers/net/octeontx/octeontx_ethdev.c
index 2a8378a33e..5f0cd1bb7f 100644
--- a/drivers/net/octeontx/octeontx_ethdev.c
+++ b/drivers/net/octeontx/octeontx_ethdev.c
@@ -1223,7 +1223,7 @@ octeontx_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (dev->data->tx_queues[qid]) {
res = octeontx_dev_tx_queue_stop(dev, qid);
if (res < 0)
- octeontx_log_err("failed stop tx_queue(%d)\n", qid);
+ octeontx_log_err("failed stop tx_queue(%d)", qid);
rte_free(dev->data->tx_queues[qid]);
}
@@ -1342,7 +1342,7 @@ octeontx_dev_rx_queue_setup(struct rte_eth_dev *dev, uint16_t qidx,
/* Verify queue index */
if (qidx >= dev->data->nb_rx_queues) {
- octeontx_log_err("QID %d not supported (0 - %d available)\n",
+ octeontx_log_err("QID %d not supported (0 - %d available)",
qidx, (dev->data->nb_rx_queues - 1));
return -ENOTSUP;
}
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index bfec085045..9626c343dc 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -1093,11 +1093,11 @@ set_iface_direction(const char *iface, pcap_t *pcap,
{
const char *direction_str = (direction == PCAP_D_IN) ? "IN" : "OUT";
if (pcap_setdirection(pcap, direction) < 0) {
- PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s\n",
+ PMD_LOG(ERR, "Setting %s pcap direction %s failed - %s",
iface, direction_str, pcap_geterr(pcap));
return -1;
}
- PMD_LOG(INFO, "Setting %s pcap direction %s\n",
+ PMD_LOG(INFO, "Setting %s pcap direction %s",
iface, direction_str);
return 0;
}
diff --git a/drivers/net/pfe/pfe_ethdev.c b/drivers/net/pfe/pfe_ethdev.c
index 0073dd7405..dc04a52639 100644
--- a/drivers/net/pfe/pfe_ethdev.c
+++ b/drivers/net/pfe/pfe_ethdev.c
@@ -161,7 +161,7 @@ pfe_recv_pkts_on_intr(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
writel(readl(HIF_INT_ENABLE) | HIF_RXPKT_INT, HIF_INT_ENABLE);
ret = epoll_wait(priv->pfe->hif.epoll_fd, &epoll_ev, 1, ticks);
if (ret < 0 && errno != EINTR)
- PFE_PMD_ERR("epoll_wait fails with %d\n", errno);
+ PFE_PMD_ERR("epoll_wait fails with %d", errno);
}
return work_done;
@@ -338,9 +338,9 @@ pfe_eth_open_cdev(struct pfe_eth_priv_s *priv)
pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDONLY);
if (pfe_cdev_fd < 0) {
- PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+ PFE_PMD_WARN("Unable to open PFE device file (%s).",
PFE_CDEV_PATH);
- PFE_PMD_WARN("Link status update will not be available.\n");
+ PFE_PMD_WARN("Link status update will not be available.");
priv->link_fd = PFE_CDEV_INVALID_FD;
return -1;
}
@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
ret = ioctl(priv->link_fd, ioctl_cmd, &lstatus);
if (ret != 0) {
- PFE_PMD_ERR("Unable to fetch link status (ioctl)\n");
+ PFE_PMD_ERR("Unable to fetch link status (ioctl)");
return -1;
}
- PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.\n",
+ PFE_PMD_DEBUG("Fetched link state (%d) for dev %d.",
lstatus, priv->id);
}
if (old.link_status == lstatus) {
/* no change in status */
- PFE_PMD_DEBUG("No change in link status; Not updating.\n");
+ PFE_PMD_DEBUG("No change in link status; Not updating.");
return -1;
}
@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
pfe_eth_atomic_write_link_status(dev, &link);
- PFE_PMD_INFO("Port (%d) link is %s\n", dev->data->port_id,
+ PFE_PMD_INFO("Port (%d) link is %s", dev->data->port_id,
link.link_status ? "up" : "down");
return 0;
@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
addr = of_get_address(np, 0, &cbus_size, NULL);
if (!addr) {
- PFE_PMD_ERR("of_get_address cannot return qman address\n");
+ PFE_PMD_ERR("of_get_address cannot return qman address");
goto err;
}
cbus_addr = of_translate_address(np, addr);
if (!cbus_addr) {
- PFE_PMD_ERR("of_translate_address failed\n");
+ PFE_PMD_ERR("of_translate_address failed");
goto err;
}
addr = of_get_address(np, 1, &ddr_size, NULL);
if (!addr) {
- PFE_PMD_ERR("of_get_address cannot return qman address\n");
+ PFE_PMD_ERR("of_get_address cannot return qman address");
goto err;
}
g_pfe->ddr_phys_baseaddr = of_translate_address(np, addr);
if (!g_pfe->ddr_phys_baseaddr) {
- PFE_PMD_ERR("of_translate_address failed\n");
+ PFE_PMD_ERR("of_translate_address failed");
goto err;
}
diff --git a/drivers/net/pfe/pfe_hif.c b/drivers/net/pfe/pfe_hif.c
index e2b23bbeb7..abb9cde996 100644
--- a/drivers/net/pfe/pfe_hif.c
+++ b/drivers/net/pfe/pfe_hif.c
@@ -309,7 +309,7 @@ client_put_rxpacket(struct hif_rx_queue *queue,
if (readl(&desc->ctrl) & CL_DESC_OWN) {
mbuf = rte_cpu_to_le_64(rte_pktmbuf_alloc(pool));
if (unlikely(!mbuf)) {
- PFE_PMD_WARN("Buffer allocation failure\n");
+ PFE_PMD_WARN("Buffer allocation failure");
return NULL;
}
@@ -770,9 +770,9 @@ pfe_hif_rx_idle(struct pfe_hif *hif)
} while (--hif_stop_loop);
if (readl(HIF_RX_STATUS) & BDP_CSR_RX_DMA_ACTV)
- PFE_PMD_ERR("Failed\n");
+ PFE_PMD_ERR("Failed");
else
- PFE_PMD_INFO("Done\n");
+ PFE_PMD_INFO("Done");
}
#endif
@@ -806,7 +806,7 @@ pfe_hif_init(struct pfe *pfe)
pfe_cdev_fd = open(PFE_CDEV_PATH, O_RDWR);
if (pfe_cdev_fd < 0) {
- PFE_PMD_WARN("Unable to open PFE device file (%s).\n",
+ PFE_PMD_WARN("Unable to open PFE device file (%s).",
PFE_CDEV_PATH);
pfe->cdev_fd = PFE_CDEV_INVALID_FD;
return -1;
@@ -817,7 +817,7 @@ pfe_hif_init(struct pfe *pfe)
/* hif interrupt enable */
err = ioctl(pfe->cdev_fd, PFE_CDEV_HIF_INTR_EN, &event_fd);
if (err) {
- PFE_PMD_ERR("\nioctl failed for intr enable err: %d\n",
+ PFE_PMD_ERR("ioctl failed for intr enable err: %d",
errno);
goto err0;
}
@@ -826,7 +826,7 @@ pfe_hif_init(struct pfe *pfe)
epoll_ev.data.fd = event_fd;
err = epoll_ctl(epoll_fd, EPOLL_CTL_ADD, event_fd, &epoll_ev);
if (err < 0) {
- PFE_PMD_ERR("epoll_ctl failed with err = %d\n", errno);
+ PFE_PMD_ERR("epoll_ctl failed with err = %d", errno);
goto err0;
}
pfe->hif.epoll_fd = epoll_fd;
diff --git a/drivers/net/pfe/pfe_hif_lib.c b/drivers/net/pfe/pfe_hif_lib.c
index 6fe6d33d23..541ba365c6 100644
--- a/drivers/net/pfe/pfe_hif_lib.c
+++ b/drivers/net/pfe/pfe_hif_lib.c
@@ -157,7 +157,7 @@ hif_lib_client_init_rx_buffers(struct hif_client_s *client,
queue->queue_id = 0;
queue->port_id = client->port_id;
queue->priv = client->priv;
- PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d\n", qno,
+ PFE_PMD_DEBUG("rx queue: %d, base: %p, size: %d", qno,
queue->base, queue->size);
}
diff --git a/drivers/net/qede/qede_rxtx.c b/drivers/net/qede/qede_rxtx.c
index c35585f5fd..dcc8cbe943 100644
--- a/drivers/net/qede/qede_rxtx.c
+++ b/drivers/net/qede/qede_rxtx.c
@@ -887,7 +887,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
mbuf = txq->sw_tx_ring[idx];
if (mbuf) {
nb_segs = mbuf->nb_segs;
- PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u\n", nb_segs);
+ PMD_TX_LOG(DEBUG, txq, "nb_segs to free %u", nb_segs);
while (nb_segs) {
/* It's like consuming rxbuf in recv() */
ecore_chain_consume(&txq->tx_pbl);
@@ -897,7 +897,7 @@ qede_free_tx_pkt(struct qede_tx_queue *txq)
rte_pktmbuf_free(mbuf);
txq->sw_tx_ring[idx] = NULL;
txq->sw_tx_cons++;
- PMD_TX_LOG(DEBUG, txq, "Freed tx packet\n");
+ PMD_TX_LOG(DEBUG, txq, "Freed tx packet");
} else {
ecore_chain_consume(&txq->tx_pbl);
txq->nb_tx_avail++;
@@ -919,7 +919,7 @@ qede_process_tx_compl(__rte_unused struct ecore_dev *edev,
#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
sw_tx_cons = ecore_chain_get_cons_idx(&txq->tx_pbl);
- PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u\n",
+ PMD_TX_LOG(DEBUG, txq, "Tx Completions = %u",
abs(hw_bd_cons - sw_tx_cons));
#endif
while (hw_bd_cons != ecore_chain_get_cons_idx(&txq->tx_pbl))
@@ -1353,7 +1353,7 @@ qede_rx_process_tpa_cmn_cont_end_cqe(__rte_unused struct qede_dev *qdev,
tpa_info->tpa_tail = curr_frag;
qede_rx_bd_ring_consume(rxq);
if (unlikely(qede_alloc_rx_buffer(rxq) != 0)) {
- PMD_RX_LOG(ERR, rxq, "mbuf allocation fails\n");
+ PMD_RX_LOG(ERR, rxq, "mbuf allocation fails");
rte_eth_devices[rxq->port_id].data->rx_mbuf_alloc_failed++;
rxq->rx_alloc_errors++;
}
@@ -1365,7 +1365,7 @@ qede_rx_process_tpa_cont_cqe(struct qede_dev *qdev,
struct qede_rx_queue *rxq,
struct eth_fast_path_rx_tpa_cont_cqe *cqe)
{
- PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]\n",
+ PMD_RX_LOG(INFO, rxq, "TPA cont[%d] - len [%d]",
cqe->tpa_agg_index, rte_le_to_cpu_16(cqe->len_list[0]));
/* only len_list[0] will have value */
qede_rx_process_tpa_cmn_cont_end_cqe(qdev, rxq, cqe->tpa_agg_index,
@@ -1388,7 +1388,7 @@ qede_rx_process_tpa_end_cqe(struct qede_dev *qdev,
rx_mb->pkt_len = cqe->total_packet_len;
PMD_RX_LOG(INFO, rxq, "TPA End[%d] reason %d cqe_len %d nb_segs %d"
- " pkt_len %d\n", cqe->tpa_agg_index, cqe->end_reason,
+ " pkt_len %d", cqe->tpa_agg_index, cqe->end_reason,
rte_le_to_cpu_16(cqe->len_list[0]), rx_mb->nb_segs,
rx_mb->pkt_len);
}
@@ -1471,7 +1471,7 @@ qede_process_sg_pkts(void *p_rxq, struct rte_mbuf *rx_mb,
pkt_len;
if (unlikely(!cur_size)) {
PMD_RX_LOG(ERR, rxq, "Length is 0 while %u BDs"
- " left for mapping jumbo\n", num_segs);
+ " left for mapping jumbo", num_segs);
qede_recycle_rx_bd_ring(rxq, qdev, num_segs);
return -EINVAL;
}
@@ -1497,7 +1497,7 @@ print_rx_bd_info(struct rte_mbuf *m, struct qede_rx_queue *rxq,
PMD_RX_LOG(INFO, rxq,
"len 0x%04x bf 0x%04x hash_val 0x%x"
" ol_flags 0x%04lx l2=%s l3=%s l4=%s tunn=%s"
- " inner_l2=%s inner_l3=%s inner_l4=%s\n",
+ " inner_l2=%s inner_l3=%s inner_l4=%s",
m->data_len, bitfield, m->hash.rss,
(unsigned long)m->ol_flags,
rte_get_ptype_l2_name(m->packet_type),
@@ -1548,7 +1548,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
PMD_RX_LOG(ERR, rxq,
"New buffers allocation failed,"
- "dropping incoming packets\n");
+ "dropping incoming packets");
dev = &rte_eth_devices[rxq->port_id];
dev->data->rx_mbuf_alloc_failed += count;
rxq->rx_alloc_errors += count;
@@ -1579,13 +1579,13 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
cqe =
(union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
cqe_type = cqe->fast_path_regular.type;
- PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+ PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);
if (likely(cqe_type == ETH_RX_CQE_TYPE_REGULAR)) {
fp_cqe = &cqe->fast_path_regular;
} else {
if (cqe_type == ETH_RX_CQE_TYPE_SLOW_PATH) {
- PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+ PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
ecore_eth_cqe_completion
(&edev->hwfns[rxq->queue_id %
edev->num_hwfns],
@@ -1611,10 +1611,10 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
#endif
if (unlikely(qede_tunn_exist(parse_flag))) {
- PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+ PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1624,7 +1624,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "Outer L3 csum failed, flags = 0x%x\n",
+ "Outer L3 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1659,7 +1659,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1667,7 +1667,7 @@ qede_recv_pkts_regular(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
- PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+ PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -1776,7 +1776,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
PMD_RX_LOG(ERR, rxq,
"New buffers allocation failed,"
- "dropping incoming packets\n");
+ "dropping incoming packets");
dev = &rte_eth_devices[rxq->port_id];
dev->data->rx_mbuf_alloc_failed += count;
rxq->rx_alloc_errors += count;
@@ -1805,7 +1805,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
cqe =
(union eth_rx_cqe *)ecore_chain_consume(&rxq->rx_comp_ring);
cqe_type = cqe->fast_path_regular.type;
- PMD_RX_LOG(INFO, rxq, "Rx CQE type %d\n", cqe_type);
+ PMD_RX_LOG(INFO, rxq, "Rx CQE type %d", cqe_type);
switch (cqe_type) {
case ETH_RX_CQE_TYPE_REGULAR:
@@ -1823,7 +1823,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
PMD_RX_LOG(INFO, rxq,
"TPA start[%d] - len_on_first_bd %d header %d"
- " [bd_list[0] %d], [seg_len %d]\n",
+ " [bd_list[0] %d], [seg_len %d]",
cqe_start_tpa->tpa_agg_index,
rte_le_to_cpu_16(cqe_start_tpa->len_on_first_bd),
cqe_start_tpa->header_len,
@@ -1843,7 +1843,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rx_mb = rxq->tpa_info[tpa_agg_idx].tpa_head;
goto tpa_end;
case ETH_RX_CQE_TYPE_SLOW_PATH:
- PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE\n");
+ PMD_RX_LOG(INFO, rxq, "Got unexpected slowpath CQE");
ecore_eth_cqe_completion(
&edev->hwfns[rxq->queue_id % edev->num_hwfns],
(struct eth_slow_path_rx_cqe *)cqe);
@@ -1881,10 +1881,10 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rss_hash = rte_le_to_cpu_32(cqe_start_tpa->rss_hash);
}
if (qede_tunn_exist(parse_flag)) {
- PMD_RX_LOG(INFO, rxq, "Rx tunneled packet\n");
+ PMD_RX_LOG(INFO, rxq, "Rx tunneled packet");
if (unlikely(qede_check_tunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1894,7 +1894,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (unlikely(qede_check_tunn_csum_l3(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "Outer L3 csum failed, flags = 0x%x\n",
+ "Outer L3 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
@@ -1933,7 +1933,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
*/
if (unlikely(qede_check_notunn_csum_l4(parse_flag))) {
PMD_RX_LOG(ERR, rxq,
- "L4 csum failed, flags = 0x%x\n",
+ "L4 csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
@@ -1941,7 +1941,7 @@ qede_recv_pkts(void *p_rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
}
if (unlikely(qede_check_notunn_csum_l3(rx_mb, parse_flag))) {
- PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x\n",
+ PMD_RX_LOG(ERR, rxq, "IP csum failed, flags = 0x%x",
parse_flag);
rxq->rx_hw_errors++;
ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
@@ -2117,13 +2117,13 @@ print_tx_bd_info(struct qede_tx_queue *txq,
rte_cpu_to_le_16(bd1->data.bitfields));
if (bd2)
PMD_TX_LOG(INFO, txq,
- "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x\n",
+ "BD2: nbytes=0x%04x bf1=0x%04x bf2=0x%04x tunn_ip=0x%04x",
rte_cpu_to_le_16(bd2->nbytes), bd2->data.bitfields1,
bd2->data.bitfields2, bd2->data.tunn_ip_size);
if (bd3)
PMD_TX_LOG(INFO, txq,
"BD3: nbytes=0x%04x bf=0x%04x MSS=0x%04x "
- "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x\n",
+ "tunn_l4_hdr_start_offset_w=0x%04x tunn_hdr_size=0x%04x",
rte_cpu_to_le_16(bd3->nbytes),
rte_cpu_to_le_16(bd3->data.bitfields),
rte_cpu_to_le_16(bd3->data.lso_mss),
@@ -2131,7 +2131,7 @@ print_tx_bd_info(struct qede_tx_queue *txq,
bd3->data.tunn_hdr_size_w);
rte_get_tx_ol_flag_list(tx_ol_flags, ol_buf, sizeof(ol_buf));
- PMD_TX_LOG(INFO, txq, "TX offloads = %s\n", ol_buf);
+ PMD_TX_LOG(INFO, txq, "TX offloads = %s", ol_buf);
}
#endif
@@ -2201,7 +2201,7 @@ qede_xmit_prep_pkts(__rte_unused void *p_txq, struct rte_mbuf **tx_pkts,
#ifdef RTE_LIBRTE_QEDE_DEBUG_TX
if (unlikely(i != nb_pkts))
- PMD_TX_LOG(ERR, txq, "TX prepare failed for %u\n",
+ PMD_TX_LOG(ERR, txq, "TX prepare failed for %u",
nb_pkts - i);
#endif
return i;
@@ -2215,16 +2215,16 @@ qede_mpls_tunn_tx_sanity_check(struct rte_mbuf *mbuf,
struct qede_tx_queue *txq)
{
if (((mbuf->outer_l2_len + mbuf->outer_l3_len) / 2) > 0xff)
- PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow\n");
+ PMD_TX_LOG(ERR, txq, "tunn_l4_hdr_start_offset overflow");
if (((mbuf->outer_l2_len + mbuf->outer_l3_len +
MPLSINUDP_HDR_SIZE) / 2) > 0xff)
- PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "tunn_hdr_size overflow");
if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE) / 2) >
ETH_TX_DATA_2ND_BD_TUNN_INNER_L2_HDR_SIZE_W_MASK)
- PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
if (((mbuf->l2_len - MPLSINUDP_HDR_SIZE + mbuf->l3_len) / 2) >
ETH_TX_DATA_2ND_BD_L4_HDR_START_OFFSET_W_MASK)
- PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow\n");
+ PMD_TX_LOG(ERR, txq, "inner_l2_hdr_size overflow");
}
#endif
diff --git a/drivers/net/thunderx/nicvf_ethdev.c b/drivers/net/thunderx/nicvf_ethdev.c
index ba2ef4058e..ee563c55ce 100644
--- a/drivers/net/thunderx/nicvf_ethdev.c
+++ b/drivers/net/thunderx/nicvf_ethdev.c
@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
/* Apply new link configurations if changed */
ret = nicvf_apply_link_speed(dev);
if (ret) {
- PMD_INIT_LOG(ERR, "Failed to set link configuration\n");
+ PMD_INIT_LOG(ERR, "Failed to set link configuration");
return ret;
}
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index ad29c3cfec..a8bdc10232 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
if (ssid == 0x1) {
PMD_INIT_LOG(ERR,
- "Read of internal subsystem device id failed\n");
+ "Read of internal subsystem device id failed");
return -ENODEV;
}
hw->subsystem_device_id = (u16)ssid >> 8 | (u16)ssid << 8;
@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
PMD_DRV_LOG(INFO, "SFP not present.");
} else if (err == 0) {
hw->mac.setup_sfp(hw);
- PMD_DRV_LOG(INFO, "detected SFP+: %d\n", hw->phy.sfp_type);
+ PMD_DRV_LOG(INFO, "detected SFP+: %d", hw->phy.sfp_type);
txgbe_dev_setup_link_alarm_handler(dev);
txgbe_dev_link_update(dev, 0);
}
diff --git a/drivers/net/txgbe/txgbe_ipsec.c b/drivers/net/txgbe/txgbe_ipsec.c
index f9f8108fb8..4af49dd802 100644
--- a/drivers/net/txgbe/txgbe_ipsec.c
+++ b/drivers/net/txgbe/txgbe_ipsec.c
@@ -100,7 +100,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no match and no free entries*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx IP table\n");
+ "No free entry left in the Rx IP table");
return -1;
}
@@ -114,7 +114,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Rx SA table\n");
+ "No free entry left in the Rx SA table");
return -1;
}
@@ -210,7 +210,7 @@ txgbe_crypto_add_sa(struct txgbe_crypto_session *ic_session)
/* Fail if no free entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "No free entry left in the Tx SA table\n");
+ "No free entry left in the Tx SA table");
return -1;
}
@@ -269,7 +269,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (ip_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx IP table\n");
+ "Entry not found in the Rx IP table");
return -1;
}
@@ -284,7 +284,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Rx SA table\n");
+ "Entry not found in the Rx SA table");
return -1;
}
@@ -329,7 +329,7 @@ txgbe_crypto_remove_sa(struct rte_eth_dev *dev,
/* Fail if no match entries*/
if (sa_index < 0) {
PMD_DRV_LOG(ERR,
- "Entry not found in the Tx SA table\n");
+ "Entry not found in the Tx SA table");
return -1;
}
reg_val = TXGBE_IPSRXIDX_WRITE | (sa_index << 3);
@@ -359,7 +359,7 @@ txgbe_crypto_create_session(void *device,
if (conf->crypto_xform->type != RTE_CRYPTO_SYM_XFORM_AEAD ||
conf->crypto_xform->aead.algo !=
RTE_CRYPTO_AEAD_AES_GCM) {
- PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode\n");
+ PMD_DRV_LOG(ERR, "Unsupported crypto transformation mode");
return -ENOTSUP;
}
aead_xform = &conf->crypto_xform->aead;
@@ -368,14 +368,14 @@ txgbe_crypto_create_session(void *device,
if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_DECRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec decryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec decryption not enabled");
return -ENOTSUP;
}
} else {
if (dev_conf->txmode.offloads & RTE_ETH_TX_OFFLOAD_SECURITY) {
ic_session->op = TXGBE_OP_AUTHENTICATED_ENCRYPTION;
} else {
- PMD_DRV_LOG(ERR, "IPsec encryption not enabled\n");
+ PMD_DRV_LOG(ERR, "IPsec encryption not enabled");
return -ENOTSUP;
}
}
@@ -389,7 +389,7 @@ txgbe_crypto_create_session(void *device,
if (ic_session->op == TXGBE_OP_AUTHENTICATED_ENCRYPTION) {
if (txgbe_crypto_add_sa(ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to add SA\n");
+ PMD_DRV_LOG(ERR, "Failed to add SA");
return -EPERM;
}
}
@@ -411,12 +411,12 @@ txgbe_crypto_remove_session(void *device,
struct txgbe_crypto_session *ic_session = SECURITY_GET_SESS_PRIV(session);
if (eth_dev != ic_session->dev) {
- PMD_DRV_LOG(ERR, "Session not bound to this device\n");
+ PMD_DRV_LOG(ERR, "Session not bound to this device");
return -ENODEV;
}
if (txgbe_crypto_remove_sa(eth_dev, ic_session)) {
- PMD_DRV_LOG(ERR, "Failed to remove session\n");
+ PMD_DRV_LOG(ERR, "Failed to remove session");
return -EFAULT;
}
diff --git a/drivers/net/txgbe/txgbe_pf.c b/drivers/net/txgbe/txgbe_pf.c
index 176f79005c..700632bd88 100644
--- a/drivers/net/txgbe/txgbe_pf.c
+++ b/drivers/net/txgbe/txgbe_pf.c
@@ -85,7 +85,7 @@ int txgbe_pf_host_init(struct rte_eth_dev *eth_dev)
sizeof(struct txgbe_vf_info) * vf_num, 0);
if (*vfinfo == NULL) {
PMD_INIT_LOG(ERR,
- "Cannot allocate memory for private VF data\n");
+ "Cannot allocate memory for private VF data");
return -ENOMEM;
}
@@ -167,14 +167,14 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
struct txgbe_ethertype_filter ethertype_filter;
if (!hw->mac.set_ethertype_anti_spoofing) {
- PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.\n");
+ PMD_DRV_LOG(INFO, "ether type anti-spoofing is not supported.");
return;
}
i = txgbe_ethertype_filter_lookup(filter_info,
TXGBE_ETHERTYPE_FLOW_CTRL);
if (i >= 0) {
- PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!\n");
+ PMD_DRV_LOG(ERR, "A ether type filter entity for flow control already exists!");
return;
}
@@ -187,7 +187,7 @@ txgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
i = txgbe_ethertype_filter_insert(filter_info,
&ethertype_filter);
if (i < 0) {
- PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.\n");
+ PMD_DRV_LOG(ERR, "Cannot find an unused ether type filter entity for flow control.");
return;
}
@@ -408,7 +408,7 @@ txgbe_disable_vf_mc_promisc(struct rte_eth_dev *eth_dev, uint32_t vf)
vmolr = rd32(hw, TXGBE_POOLETHCTL(vf));
- PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous\n", vf);
+ PMD_DRV_LOG(INFO, "VF %u: disabling multicast promiscuous", vf);
vmolr &= ~TXGBE_POOLETHCTL_MCP;
@@ -570,7 +570,7 @@ txgbe_negotiate_vf_api(struct rte_eth_dev *eth_dev,
break;
}
- PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d\n",
+ PMD_DRV_LOG(ERR, "Negotiate invalid api version %u from VF %d",
api_version, vf);
return -1;
@@ -614,7 +614,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
case RTE_ETH_MQ_TX_NONE:
case RTE_ETH_MQ_TX_DCB:
PMD_DRV_LOG(ERR, "PF must work with virtualization for VF %u"
- ", but its tx mode = %d\n", vf,
+ ", but its tx mode = %d", vf,
eth_conf->txmode.mq_mode);
return -1;
@@ -648,7 +648,7 @@ txgbe_get_vf_queues(struct rte_eth_dev *eth_dev, uint32_t vf, uint32_t *msgbuf)
break;
default:
- PMD_DRV_LOG(ERR, "PF work with invalid mode = %d\n",
+ PMD_DRV_LOG(ERR, "PF work with invalid mode = %d",
eth_conf->txmode.mq_mode);
return -1;
}
@@ -704,7 +704,7 @@ txgbe_set_vf_mc_promisc(struct rte_eth_dev *eth_dev,
if (!(fctrl & TXGBE_PSRCTL_UCP)) {
/* VF promisc requires PF in promisc */
PMD_DRV_LOG(ERR,
- "Enabling VF promisc requires PF in promisc\n");
+ "Enabling VF promisc requires PF in promisc");
return -1;
}
@@ -741,7 +741,7 @@ txgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
if (index) {
if (!rte_is_valid_assigned_ether_addr(ea)) {
- PMD_DRV_LOG(ERR, "set invalid mac vf:%d\n", vf);
+ PMD_DRV_LOG(ERR, "set invalid mac vf:%d", vf);
return -1;
}
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 1bfd6aba80..d93d443ec9 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
scvq = virtqueue_alloc(&dev->hw, vq->vq_queue_index, vq->vq_nentries,
VTNET_CQ, SOCKET_ID_ANY, name);
if (!scvq) {
- PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq\n", dev->path);
+ PMD_INIT_LOG(ERR, "(%s) Failed to alloc shadow control vq", dev->path);
return -ENOMEM;
}
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 70ae9c6035..f98cdb6d58 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
ret = VMXNET3_READ_BAR1_REG(hw, VMXNET3_REG_CMD);
if (ret != 0)
PMD_INIT_LOG(DEBUG,
- "Failed in setup memory region cmd\n");
+ "Failed in setup memory region cmd");
ret = 0;
} else {
- PMD_INIT_LOG(DEBUG, "Failed to setup memory region\n");
+ PMD_INIT_LOG(DEBUG, "Failed to setup memory region");
}
} else {
PMD_INIT_LOG(WARNING, "Memregs can't init (rx: %d, tx: %d)",
diff --git a/drivers/net/vmxnet3/vmxnet3_rxtx.c b/drivers/net/vmxnet3/vmxnet3_rxtx.c
index 380f41f98b..e226641fdf 100644
--- a/drivers/net/vmxnet3/vmxnet3_rxtx.c
+++ b/drivers/net/vmxnet3/vmxnet3_rxtx.c
@@ -1341,7 +1341,7 @@ vmxnet3_dev_rxtx_init(struct rte_eth_dev *dev)
/* Zero number of descriptors in the configuration of the RX queue */
if (ret == 0) {
PMD_INIT_LOG(ERR,
- "Invalid configuration in Rx queue: %d, buffers ring: %d\n",
+ "Invalid configuration in Rx queue: %d, buffers ring: %d",
i, j);
return -EINVAL;
}
diff --git a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
index aeee4ac289..de8c024abb 100644
--- a/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
+++ b/drivers/raw/dpaa2_cmdif/dpaa2_cmdif.c
@@ -68,7 +68,7 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_CMDIF_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -99,14 +99,14 @@ dpaa2_cmdif_enqueue_bufs(struct rte_rawdev *dev,
do {
ret = qbman_swp_enqueue_multiple(swp, &eqdesc, &fd, NULL, 1);
if (ret < 0 && ret != -EBUSY)
- DPAA2_CMDIF_ERR("Transmit failure with err: %d\n", ret);
+ DPAA2_CMDIF_ERR("Transmit failure with err: %d", ret);
retry_count++;
} while ((ret == -EBUSY) && (retry_count < DPAA2_MAX_TX_RETRY_COUNT));
if (ret < 0)
return ret;
- DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet\n");
+ DPAA2_CMDIF_DP_DEBUG("Successfully transmitted a packet");
return 1;
}
@@ -133,7 +133,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
ret = dpaa2_affine_qbman_swp();
if (ret) {
DPAA2_CMDIF_ERR(
- "Failed to allocate IO portal, tid: %d\n",
+ "Failed to allocate IO portal, tid: %d",
rte_gettid());
return 0;
}
@@ -152,7 +152,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
while (1) {
if (qbman_swp_pull(swp, &pulldesc)) {
- DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy\n");
+ DPAA2_CMDIF_DP_WARN("VDQ cmd not issued. QBMAN is busy");
/* Portal was busy, try again */
continue;
}
@@ -169,7 +169,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
/* Check for valid frame. */
status = (uint8_t)qbman_result_DQ_flags(dq_storage);
if (unlikely((status & QBMAN_DQ_STAT_VALIDFRAME) == 0)) {
- DPAA2_CMDIF_DP_DEBUG("No frame is delivered\n");
+ DPAA2_CMDIF_DP_DEBUG("No frame is delivered");
return 0;
}
@@ -181,7 +181,7 @@ dpaa2_cmdif_dequeue_bufs(struct rte_rawdev *dev,
cmdif_rcv_cnxt->flc = DPAA2_GET_FD_FLC(fd);
cmdif_rcv_cnxt->frc = DPAA2_GET_FD_FRC(fd);
- DPAA2_CMDIF_DP_DEBUG("packet received\n");
+ DPAA2_CMDIF_DP_DEBUG("packet received");
return 1;
}
diff --git a/drivers/raw/ifpga/afu_pmd_n3000.c b/drivers/raw/ifpga/afu_pmd_n3000.c
index 67b3941265..6aae1b224e 100644
--- a/drivers/raw/ifpga/afu_pmd_n3000.c
+++ b/drivers/raw/ifpga/afu_pmd_n3000.c
@@ -1506,7 +1506,7 @@ static int dma_afu_set_irqs(struct afu_rawdev *dev, uint32_t vec_start,
rte_memcpy(&irq_set->data, efds, sizeof(*efds) * count);
ret = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
if (ret)
- IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts\n");
+ IFPGA_RAWDEV_PMD_ERR("Error enabling MSI-X interrupts");
rte_free(irq_set);
return ret;
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index f89bd3f9e2..997fbf8a0d 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -383,7 +383,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
goto fail;
if (value == 0xdeadbeef) {
- IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x\n",
+ IFPGA_RAWDEV_PMD_DEBUG("dev_id %d sensor %s value %x",
raw_dev->dev_id, sensor->name, value);
continue;
}
@@ -391,13 +391,13 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
/* monitor temperature sensors */
if (!strcmp(sensor->name, "Board Temperature") ||
!strcmp(sensor->name, "FPGA Die Temperature")) {
- IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d\n",
+ IFPGA_RAWDEV_PMD_DEBUG("read sensor %s %d %d %d",
sensor->name, value, sensor->high_warn,
sensor->high_fatal);
if (HIGH_WARN(sensor, value) ||
LOW_WARN(sensor, value)) {
- IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d\n",
+ IFPGA_RAWDEV_PMD_INFO("%s reach threshold %d",
sensor->name, value);
*gsd_start = true;
break;
@@ -408,7 +408,7 @@ ifpga_monitor_sensor(struct rte_rawdev *raw_dev,
if (!strcmp(sensor->name, "12V AUX Voltage")) {
if (value < AUX_VOLTAGE_WARN) {
IFPGA_RAWDEV_PMD_INFO(
- "%s reach threshold %d mV\n",
+ "%s reach threshold %d mV",
sensor->name, value);
*gsd_start = true;
break;
@@ -444,7 +444,7 @@ static int set_surprise_link_check_aer(
if (ifpga_monitor_sensor(rdev, &enable))
return -EFAULT;
if (enable || force_disable) {
- IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown\n");
+ IFPGA_RAWDEV_PMD_ERR("Set AER, pls graceful shutdown");
ifpga_rdev->aer_enable = 1;
/* get bridge fd */
strlcpy(path, "/sys/bus/pci/devices/", sizeof(path));
@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
continue;
if (ifpga_fill_afu_dev(acc, afu_dev)) {
- IFPGA_RAWDEV_PMD_ERR("cannot get info\n");
+ IFPGA_RAWDEV_PMD_ERR("cannot get info");
return -ENOENT;
}
}
@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
ret = opae_manager_flash(mgr, port_id, buffer, size, status);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s pr error %d\n", __func__, ret);
+ IFPGA_RAWDEV_PMD_ERR("%s pr error %d", __func__, ret);
return ret;
}
ret = opae_bridge_reset(br);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d\n",
+ IFPGA_RAWDEV_PMD_ERR("%s reset port:%d error %d",
__func__, port_id, ret);
return ret;
}
@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
file_fd = open(file_name, O_RDONLY);
if (file_fd < 0) {
- IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s\n",
+ IFPGA_RAWDEV_PMD_ERR("%s: open file error: %s",
__func__, file_name);
- IFPGA_RAWDEV_PMD_ERR("Message : %s\n", strerror(errno));
+ IFPGA_RAWDEV_PMD_ERR("Message : %s", strerror(errno));
return -EINVAL;
}
ret = stat(file_name, &file_stat);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s\n",
+ IFPGA_RAWDEV_PMD_ERR("stat on bitstream file failed: %s",
file_name);
ret = -EINVAL;
goto close_fd;
@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
goto close_fd;
}
- IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu\n", buffer_size);
+ IFPGA_RAWDEV_PMD_INFO("bitstream file size: %zu", buffer_size);
buffer = rte_malloc(NULL, buffer_size, 0);
if (!buffer) {
ret = -ENOMEM;
@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
/*do PR now*/
ret = fpga_pr(rawdev, port_id, buffer, buffer_size, &pr_error);
- IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.\n", port_id,
+ IFPGA_RAWDEV_PMD_INFO("downloading to device port %d....%s.", port_id,
ret ? "failed" : "success");
if (ret) {
ret = -EINVAL;
@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
afu_pr_conf->afu_id.port,
afu_pr_conf->bs_path);
if (ret) {
- IFPGA_RAWDEV_PMD_ERR("do pr error %d\n", ret);
+ IFPGA_RAWDEV_PMD_ERR("do pr error %d", ret);
return ret;
}
}
@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
rte_memcpy(&afu_pr_conf->afu_id.uuid.uuid_high, uuid.b + 8,
sizeof(u64));
- IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx\n",
+ IFPGA_RAWDEV_PMD_INFO("%s: uuid_l=0x%lx, uuid_h=0x%lx",
__func__,
(unsigned long)afu_pr_conf->afu_id.uuid.uuid_low,
(unsigned long)afu_pr_conf->afu_id.uuid.uuid_high);
@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
if (ret)
return -EINVAL;
- IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_INFO("seu emr low: 0x%" PRIx64, val);
ret = ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_SEU_EMR_HIGH, &val);
if (ret)
return -EINVAL;
- IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_INFO("seu emr high: 0x%" PRIx64, val);
return 0;
}
@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_NONFATAL_ERRORS, &val))
return -EINVAL;
if ((val & 0x40) != 0)
- IFPGA_RAWDEV_PMD_INFO("clean not done\n");
+ IFPGA_RAWDEV_PMD_INFO("clean not done");
return 0;
}
@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
return -EINVAL;
- IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_DEBUG("before clean 0x%" PRIx64, val);
ifpga_set_fme_error_prop(mgr, FME_ERR_PROP_CLEAR, val);
if (ifpga_get_fme_error_prop(mgr, FME_ERR_PROP_ERRORS, &val))
return -EINVAL;
- IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64 "\n", val);
+ IFPGA_RAWDEV_PMD_DEBUG("after clean 0x%" PRIx64, val);
return 0;
}
@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
fme_error0.csr = val;
if (fme_error0.fabric_err)
- IFPGA_RAWDEV_PMD_ERR("Fabric error\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric error");
else if (fme_error0.fabfifo_overflow)
- IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric fifo under/overflow error");
else if (fme_error0.afu_acc_mode_err)
- IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected\n");
+ IFPGA_RAWDEV_PMD_ERR("AFU PF/VF access mismatch detected");
else if (fme_error0.pcie0cdc_parity_err)
- IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error\n");
+ IFPGA_RAWDEV_PMD_ERR("PCIe0 CDC Parity Error");
else if (fme_error0.cvlcdc_parity_err)
- IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error\n");
+ IFPGA_RAWDEV_PMD_ERR("CVL CDC Parity Error");
else if (fme_error0.fpgaseuerr)
fme_err_read_seu_emr(mgr);
@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
fme_catfatal.csr = val;
if (fme_catfatal.cci_fatal_err)
- IFPGA_RAWDEV_PMD_ERR("CCI error detected\n");
+ IFPGA_RAWDEV_PMD_ERR("CCI error detected");
else if (fme_catfatal.fabric_fatal_err)
- IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected\n");
+ IFPGA_RAWDEV_PMD_ERR("Fabric fatal error detected");
else if (fme_catfatal.pcie_poison_err)
- IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports\n");
+ IFPGA_RAWDEV_PMD_ERR("Poison error from PCIe ports");
else if (fme_catfatal.inject_fata_err)
- IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error\n");
+ IFPGA_RAWDEV_PMD_ERR("Injected Fatal Error");
else if (fme_catfatal.crc_catast_err)
- IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error\n");
+ IFPGA_RAWDEV_PMD_ERR("a catastrophic EDCRC error");
else if (fme_catfatal.injected_catast_err)
- IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error\n");
+ IFPGA_RAWDEV_PMD_ERR("Injected Catastrophic Error");
else if (fme_catfatal.bmc_seu_catast_err)
fme_err_read_seu_emr(mgr);
@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
nonfaterr.csr = val;
if (nonfaterr.temp_thresh_ap1)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP1");
else if (nonfaterr.temp_thresh_ap2)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP2");
else if (nonfaterr.pcie_error)
- IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie\n");
+ IFPGA_RAWDEV_PMD_INFO("an error has occurred in pcie");
else if (nonfaterr.portfatal_error)
- IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.\n");
+ IFPGA_RAWDEV_PMD_INFO("fatal error occurred in AFU port.");
else if (nonfaterr.proc_hot)
- IFPGA_RAWDEV_PMD_INFO("a ProcHot event\n");
+ IFPGA_RAWDEV_PMD_INFO("a ProcHot event");
else if (nonfaterr.afu_acc_mode_err)
- IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch\n");
+ IFPGA_RAWDEV_PMD_INFO("an AFU PF/VF access mismatch");
else if (nonfaterr.injected_nonfata_err) {
- IFPGA_RAWDEV_PMD_INFO("Injected Warning Error\n");
+ IFPGA_RAWDEV_PMD_INFO("Injected Warning Error");
fme_clear_warning_intr(mgr);
} else if (nonfaterr.temp_thresh_AP6)
- IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6\n");
+ IFPGA_RAWDEV_PMD_INFO("Temperature threshold triggered AP6");
else if (nonfaterr.power_thresh_AP1)
- IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1\n");
+ IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP1");
else if (nonfaterr.power_thresh_AP2)
- IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2\n");
+ IFPGA_RAWDEV_PMD_INFO("Power threshold triggered AP2");
else if (nonfaterr.mbp_err)
- IFPGA_RAWDEV_PMD_INFO("an MBP event\n");
+ IFPGA_RAWDEV_PMD_INFO("an MBP event");
return 0;
}
@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
{
struct opae_manager *mgr = (struct opae_manager *)param;
- IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred\n", __func__);
+ IFPGA_RAWDEV_PMD_INFO("%s interrupt occurred", __func__);
fme_err_handle_error0(mgr);
fme_err_handle_nonfaterror(mgr);
@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
return -EINVAL;
if ((*intr_handle) == NULL) {
- IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered\n",
+ IFPGA_RAWDEV_PMD_ERR("%s interrupt %d not registered",
type == IFPGA_FME_IRQ ? "FME" : "AFU",
type == IFPGA_FME_IRQ ? 0 : vec_start);
return -ENOENT;
@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
rc = rte_intr_callback_unregister(*intr_handle, handler, arg);
if (rc < 0) {
- IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d\n",
+ IFPGA_RAWDEV_PMD_ERR("Failed to unregister %s interrupt %d",
type == IFPGA_FME_IRQ ? "FME" : "AFU",
type == IFPGA_FME_IRQ ? 0 : vec_start);
} else {
@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
rte_intr_efds_index_get(*intr_handle, 0)))
return -rte_errno;
- IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d\n",
+ IFPGA_RAWDEV_PMD_DEBUG("register %s irq, vfio_fd=%d, fd=%d",
name, rte_intr_dev_fd_get(*intr_handle),
rte_intr_fd_get(*intr_handle));
@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
return -EINVAL;
}
- IFPGA_RAWDEV_PMD_INFO("success register %s interrupt\n", name);
+ IFPGA_RAWDEV_PMD_INFO("success register %s interrupt", name);
free(intr_efds);
return 0;
diff --git a/drivers/regex/cn9k/cn9k_regexdev.c b/drivers/regex/cn9k/cn9k_regexdev.c
index e96cbf4141..aa809ab5bf 100644
--- a/drivers/regex/cn9k/cn9k_regexdev.c
+++ b/drivers/regex/cn9k/cn9k_regexdev.c
@@ -192,7 +192,7 @@ ree_dev_register(const char *name)
{
struct rte_regexdev *dev;
- cn9k_ree_dbg("Creating regexdev %s\n", name);
+ cn9k_ree_dbg("Creating regexdev %s", name);
/* allocate device structure */
dev = rte_regexdev_register(name);
diff --git a/drivers/vdpa/ifc/ifcvf_vdpa.c b/drivers/vdpa/ifc/ifcvf_vdpa.c
index f034bd59ba..2958368813 100644
--- a/drivers/vdpa/ifc/ifcvf_vdpa.c
+++ b/drivers/vdpa/ifc/ifcvf_vdpa.c
@@ -536,7 +536,7 @@ notify_relay(void *arg)
if (nfds < 0) {
if (errno == EINTR)
continue;
- DRV_LOG(ERR, "epoll_wait return fail\n");
+ DRV_LOG(ERR, "epoll_wait return fail");
return 1;
}
@@ -651,12 +651,12 @@ intr_relay(void *arg)
errno == EWOULDBLOCK ||
errno == EAGAIN)
continue;
- DRV_LOG(ERR, "Error reading from file descriptor %d: %s\n",
+ DRV_LOG(ERR, "Error reading from file descriptor %d: %s",
csc_event.data.fd,
strerror(errno));
goto out;
} else if (nbytes == 0) {
- DRV_LOG(ERR, "Read nothing from file descriptor %d\n",
+ DRV_LOG(ERR, "Read nothing from file descriptor %d",
csc_event.data.fd);
continue;
} else {
@@ -1500,7 +1500,7 @@ ifcvf_pci_get_device_type(struct rte_pci_device *pci_dev)
uint16_t device_id;
if (pci_device_id < 0x1000 || pci_device_id > 0x107f) {
- DRV_LOG(ERR, "Probe device is not a virtio device\n");
+ DRV_LOG(ERR, "Probe device is not a virtio device");
return -1;
}
@@ -1577,7 +1577,7 @@ ifcvf_blk_get_config(int vid, uint8_t *config, uint32_t size)
DRV_LOG(DEBUG, " sectors : %u", dev_cfg->geometry.sectors);
DRV_LOG(DEBUG, "num_queues: 0x%08x", dev_cfg->num_queues);
- DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]\n",
+ DRV_LOG(DEBUG, "config: [%x] [%x] [%x] [%x] [%x] [%x] [%x] [%x]",
config[0], config[1], config[2], config[3], config[4],
config[5], config[6], config[7]);
return 0;
diff --git a/drivers/vdpa/nfp/nfp_vdpa.c b/drivers/vdpa/nfp/nfp_vdpa.c
index cef80b5476..3e4247dbcb 100644
--- a/drivers/vdpa/nfp/nfp_vdpa.c
+++ b/drivers/vdpa/nfp/nfp_vdpa.c
@@ -127,7 +127,7 @@ nfp_vdpa_vfio_setup(struct nfp_vdpa_dev *device)
if (device->vfio_group_fd < 0)
goto container_destroy;
- DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,\n",
+ DRV_VDPA_LOG(DEBUG, "container_fd=%d, group_fd=%d,",
device->vfio_container_fd, device->vfio_group_fd);
ret = rte_pci_map_device(pci_dev);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
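Note: the comparison below was presumably produced by diffing the upstream patch file against the backported one. As a rough, illustrative sketch only (not part of the original mail; the file names and the use of Python's difflib are assumptions), a reviewer could reproduce a similar check locally on two exported patch files:

#!/usr/bin/env python3
# Minimal sketch: print a unified diff of an upstream patch file vs its
# backported counterpart, similar in spirit to the meta-diff shown below.
# File names passed on the command line are illustrative assumptions.
import difflib
import sys

def compare_patches(upstream_path: str, backport_path: str) -> int:
    # Read both patch files line by line, tolerating odd encodings.
    with open(upstream_path, encoding="utf-8", errors="replace") as f:
        upstream = f.readlines()
    with open(backport_path, encoding="utf-8", errors="replace") as f:
        backport = f.readlines()
    # Build a unified diff; an empty result means no rebase changes.
    diff = list(difflib.unified_diff(upstream, backport,
                                     fromfile=upstream_path,
                                     tofile=backport_path))
    sys.stdout.writelines(diff)
    return 1 if diff else 0

if __name__ == "__main__":
    # Example (hypothetical file names):
    #   python3 compare_patches.py upstream.patch \
    #       0003-drivers-remove-redundant-newline-from-logs.patch
    sys.exit(compare_patches(sys.argv[1], sys.argv[2]))

A non-empty output, as in the meta-diff that follows, simply indicates the patch needed adjustment to apply to the stable branch and should be reviewed for correctness.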
--- - 2024-11-11 14:23:05.530150931 +0800
+++ 0003-drivers-remove-redundant-newline-from-logs.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From f665790a5dbad7b645ff46f31d65e977324e7bfc Mon Sep 17 00:00:00 2001
+From 5b424bd34d8c972d428d03bc9952528d597e2040 Mon Sep 17 00:00:00 2001
@@ -4,0 +5 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
@@ -6 +7 @@
-Fix places where two newline characters may be logged.
+[ upstream commit f665790a5dbad7b645ff46f31d65e977324e7bfc ]
@@ -8 +9 @@
-Cc: stable@dpdk.org
+Fix places where two newline characters may be logged.
@@ -13 +14 @@
- drivers/baseband/acc/rte_acc100_pmd.c | 20 +-
+ drivers/baseband/acc/rte_acc100_pmd.c | 22 +-
@@ -15 +16 @@
- .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 16 +-
+ .../fpga_5gnr_fec/rte_fpga_5gnr_fec.c | 14 +-
@@ -53 +54 @@
- drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 44 ++--
+ drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 42 ++--
@@ -55 +56 @@
- drivers/crypto/dpaa_sec/dpaa_sec.c | 27 ++-
+ drivers/crypto/dpaa_sec/dpaa_sec.c | 24 +-
@@ -99,3 +99,0 @@
- drivers/net/cnxk/cn10k_ethdev.c | 2 +-
- drivers/net/cnxk/cn9k_ethdev.c | 2 +-
- drivers/net/cnxk/cnxk_eswitch_devargs.c | 2 +-
@@ -105,3 +102,0 @@
- drivers/net/cnxk/cnxk_rep.c | 8 +-
- drivers/net/cnxk/cnxk_rep.h | 2 +-
- drivers/net/cpfl/cpfl_ethdev.c | 2 +-
@@ -128,3 +123 @@
- drivers/net/gve/base/gve_adminq.c | 4 +-
- drivers/net/gve/gve_rx.c | 2 +-
- drivers/net/gve/gve_tx.c | 2 +-
+ drivers/net/gve/base/gve_adminq.c | 2 +-
@@ -139 +132 @@
- drivers/net/i40e/i40e_ethdev.c | 51 ++--
+ drivers/net/i40e/i40e_ethdev.c | 37 ++-
@@ -141 +134 @@
- drivers/net/i40e/i40e_rxtx.c | 42 ++--
+ drivers/net/i40e/i40e_rxtx.c | 24 +-
@@ -143 +136 @@
- drivers/net/iavf/iavf_rxtx.c | 16 +-
+ drivers/net/iavf/iavf_rxtx.c | 2 +-
@@ -146 +139 @@
- drivers/net/ice/ice_ethdev.c | 50 ++--
+ drivers/net/ice/ice_ethdev.c | 44 ++--
@@ -149 +142 @@
- drivers/net/ice/ice_rxtx.c | 18 +-
+ drivers/net/ice/ice_rxtx.c | 2 +-
@@ -168 +161 @@
- drivers/net/octeon_ep/otx_ep_ethdev.c | 82 +++----
+ drivers/net/octeon_ep/otx_ep_ethdev.c | 80 +++----
@@ -183 +175,0 @@
- drivers/net/virtio/virtio_user/vhost_vdpa.c | 2 +-
@@ -193 +185 @@
- 180 files changed, 1244 insertions(+), 1262 deletions(-)
+ 171 files changed, 1194 insertions(+), 1211 deletions(-)
@@ -196 +188 @@
-index ab69350080..5c91acab7e 100644
+index 292537e24d..9d028f0f48 100644
@@ -199 +191 @@
-@@ -229,7 +229,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
+@@ -230,7 +230,7 @@ fetch_acc100_config(struct rte_bbdev *dev)
@@ -208 +200,10 @@
-@@ -2672,7 +2672,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -1229,7 +1229,7 @@ acc100_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
+ harq_in_length = RTE_ALIGN_FLOOR(harq_in_length, ACC100_HARQ_ALIGN_COMP);
+
+ if ((harq_layout[harq_index].offset > 0) && harq_prun) {
+- rte_bbdev_log_debug("HARQ IN offset unexpected for now\n");
++ rte_bbdev_log_debug("HARQ IN offset unexpected for now");
+ fcw->hcin_size0 = harq_layout[harq_index].size0;
+ fcw->hcin_offset = harq_layout[harq_index].offset;
+ fcw->hcin_size1 = harq_in_length - harq_layout[harq_index].offset;
+@@ -2890,7 +2890,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -217 +218 @@
-@@ -2710,7 +2710,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2928,7 +2928,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -226 +227 @@
-@@ -2726,7 +2726,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
+@@ -2944,7 +2944,7 @@ harq_loopback(struct acc_queue *q, struct rte_bbdev_dec_op *op,
@@ -235,3 +236 @@
-@@ -3450,7 +3450,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
- }
- avail -= 1;
+@@ -3691,7 +3691,7 @@ acc100_enqueue_ldpc_dec_cb(struct rte_bbdev_queue_data *q_data,
@@ -239,2 +238,4 @@
-- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d\n",
-+ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d",
+ if (i > 0)
+ same_op = cmp_ldpc_dec_op(&ops[i-1]);
+- rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d\n",
++ rte_bbdev_log(INFO, "Op %d %d %d %d %d %d %d %d %d %d %d %d",
@@ -244 +245 @@
-@@ -3566,7 +3566,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+@@ -3808,7 +3808,7 @@ dequeue_enc_one_op_cb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
@@ -253,3 +254,3 @@
-@@ -3643,7 +3643,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
- atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- rte_memory_order_relaxed);
+@@ -3885,7 +3885,7 @@ dequeue_enc_one_op_tb(struct acc_queue *q, struct rte_bbdev_enc_op **ref_op,
+ atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ __ATOMIC_RELAXED);
@@ -262 +263 @@
-@@ -3739,7 +3739,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3981,7 +3981,7 @@ dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -271,3 +272,3 @@
-@@ -3818,7 +3818,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
- atom_desc.atom_hdr = rte_atomic_load_explicit((uint64_t __rte_atomic *)desc,
- rte_memory_order_relaxed);
+@@ -4060,7 +4060,7 @@ dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+ atom_desc.atom_hdr = __atomic_load_n((uint64_t *)desc,
+ __ATOMIC_RELAXED);
@@ -280 +281 @@
-@@ -4552,7 +4552,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4797,7 +4797,7 @@ acc100_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -290 +291 @@
-index 585dc49bd6..fad984ccc1 100644
+index 686e086a5c..88e1d03ebf 100644
@@ -365 +366 @@
-@@ -3304,7 +3304,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
+@@ -3319,7 +3319,7 @@ vrb_dequeue_ldpc_dec_one_op_cb(struct rte_bbdev_queue_data *q_data,
@@ -374 +375 @@
-@@ -3411,7 +3411,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
+@@ -3440,7 +3440,7 @@ vrb_dequeue_dec_one_op_tb(struct acc_queue *q, struct rte_bbdev_dec_op **ref_op,
@@ -383 +384 @@
-@@ -3946,7 +3946,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
+@@ -3985,7 +3985,7 @@ vrb2_check_mld_r_constraint(struct rte_bbdev_mldts_op *op) {
@@ -392 +393 @@
-@@ -4606,7 +4606,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -4650,7 +4650,7 @@ vrb1_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -401 +402 @@
-@@ -4976,7 +4976,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
+@@ -5020,7 +5020,7 @@ vrb2_configure(const char *dev_name, struct rte_acc_conf *conf)
@@ -411 +412 @@
-index 9b253cde28..3e04e44ba2 100644
+index 6b0644ffc5..d60cd3a5c5 100644
@@ -414 +415 @@
-@@ -1997,10 +1997,10 @@ fpga_5gnr_mutex_acquisition(struct fpga_5gnr_queue *q)
+@@ -1498,14 +1498,14 @@ fpga_mutex_acquisition(struct fpga_queue *q)
@@ -417,5 +418,9 @@
- usleep(FPGA_5GNR_TIMEOUT_CHECK_INTERVAL);
-- rte_bbdev_log_debug("Acquiring Mutex for %x\n", q->ddr_mutex_uuid);
-+ rte_bbdev_log_debug("Acquiring Mutex for %x", q->ddr_mutex_uuid);
- fpga_5gnr_reg_write_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX, mutex_ctrl);
- mutex_read = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_MUTEX);
+ usleep(FPGA_TIMEOUT_CHECK_INTERVAL);
+- rte_bbdev_log_debug("Acquiring Mutex for %x\n",
++ rte_bbdev_log_debug("Acquiring Mutex for %x",
+ q->ddr_mutex_uuid);
+ fpga_reg_write_32(q->d->mmio_base,
+ FPGA_5GNR_FEC_MUTEX,
+ mutex_ctrl);
+ mutex_read = fpga_reg_read_32(q->d->mmio_base,
+ FPGA_5GNR_FEC_MUTEX);
@@ -427,7 +432,6 @@
-@@ -2038,7 +2038,7 @@ fpga_5gnr_harq_write_loopback(struct fpga_5gnr_queue *q,
- reg_32 = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- if (reg_32 < harq_in_length) {
- left_length = reg_32;
-- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- }
+@@ -1546,7 +1546,7 @@ fpga_harq_write_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ if (reg_32 < harq_in_length) {
+ left_length = reg_32;
+- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -436,7 +440,7 @@
-@@ -2108,17 +2108,17 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- reg = fpga_5gnr_reg_read_32(q->d->mmio_base, FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
- if (reg < harq_in_length) {
- harq_in_length = reg;
-- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
-+ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
- }
+ input = (uint64_t *)rte_pktmbuf_mtod_offset(harq_input,
+@@ -1609,18 +1609,18 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_HARQ_BUF_SIZE_REGS);
+ if (reg < harq_in_length) {
+ harq_in_length = reg;
+- rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size\n");
++ rte_bbdev_log(ERR, "HARQ in length > HARQ buffer size");
@@ -448 +452,2 @@
- harq_output->buf_len - rte_pktmbuf_headroom(harq_output),
+ harq_output->buf_len -
+ rte_pktmbuf_headroom(harq_output),
@@ -450 +455,2 @@
- harq_in_length = harq_output->buf_len - rte_pktmbuf_headroom(harq_output);
+ harq_in_length = harq_output->buf_len -
+ rte_pktmbuf_headroom(harq_output);
@@ -457,6 +463,6 @@
-@@ -2142,7 +2142,7 @@ fpga_5gnr_harq_read_loopback(struct fpga_5gnr_queue *q,
- while (reg != 1) {
- reg = fpga_5gnr_reg_read_8(q->d->mmio_base, FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
- if (reg == FPGA_5GNR_DDR_OVERFLOW) {
-- rte_bbdev_log(ERR, "Read address is overflow!\n");
-+ rte_bbdev_log(ERR, "Read address is overflow!");
+@@ -1642,7 +1642,7 @@ fpga_harq_read_loopback(struct fpga_queue *q,
+ FPGA_5GNR_FEC_DDR4_RD_RDY_REGS);
+ if (reg == FPGA_DDR_OVERFLOW) {
+ rte_bbdev_log(ERR,
+- "Read address is overflow!\n");
++ "Read address is overflow!");
@@ -466,9 +471,0 @@
-@@ -3376,7 +3376,7 @@ int rte_fpga_5gnr_fec_configure(const char *dev_name, const struct rte_fpga_5gnr
- return -ENODEV;
- }
- struct rte_pci_device *pci_dev = RTE_DEV_TO_PCI(bbdev->device);
-- rte_bbdev_log(INFO, "Configure dev id %x\n", pci_dev->id.device_id);
-+ rte_bbdev_log(INFO, "Configure dev id %x", pci_dev->id.device_id);
- if (pci_dev->id.device_id == VC_5GNR_PF_DEVICE_ID)
- return vc_5gnr_configure(dev_name, conf);
- else if (pci_dev->id.device_id == AGX100_PF_DEVICE_ID)
@@ -498 +495 @@
-index 574743a9da..1f661dd801 100644
+index 8ddc7ff05f..a66dcd8962 100644
@@ -574 +571 @@
-index c155f4a2fd..097d6dca08 100644
+index 89f0f329c0..adb452fd3e 100644
@@ -577 +574 @@
-@@ -500,7 +500,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -499,7 +499,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -586 +583 @@
-@@ -515,7 +515,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
+@@ -514,7 +514,7 @@ rte_fslmc_find_device(const struct rte_device *start, rte_dev_cmp_t cmp,
@@ -595 +592 @@
-@@ -629,14 +629,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
+@@ -628,14 +628,14 @@ fslmc_bus_dev_iterate(const void *start, const char *str,
@@ -613 +610 @@
-index e12fd62f34..6981679a2d 100644
+index 5966776a85..b90efeb651 100644
@@ -748 +745 @@
-index daf7684d8e..438ac72563 100644
+index 14aff233d5..35eb8b7628 100644
@@ -751 +748 @@
-@@ -1564,7 +1564,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
+@@ -1493,7 +1493,7 @@ dev_init(struct dev *dev, struct plt_pci_device *pci_dev)
@@ -868 +865 @@
-index ac522f8235..d890fad681 100644
+index 9e5e614b3b..92401e04d0 100644
@@ -871 +868 @@
-@@ -908,7 +908,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
+@@ -906,7 +906,7 @@ nix_tm_sq_flush_pre(struct roc_nix_sq *sq)
@@ -894 +891 @@
-index 9f3870a311..e24826bb5d 100644
+index e1cef7a670..c1b91ad92f 100644
@@ -897 +894 @@
-@@ -504,7 +504,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
+@@ -503,7 +503,7 @@ roc_nix_tm_hierarchy_disable(struct roc_nix *roc_nix)
@@ -920 +917 @@
-index 293b0c81a1..499f93e373 100644
+index 748d287bad..b02c9c7f38 100644
@@ -923 +920 @@
-@@ -186,7 +186,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
+@@ -171,7 +171,7 @@ sso_rsrc_get(struct roc_sso *roc_sso)
@@ -933 +930 @@
-index 095afbb9e6..83228fb2b6 100644
+index f8607b2852..d39af3c85e 100644
@@ -936 +933 @@
-@@ -342,7 +342,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
+@@ -317,7 +317,7 @@ tim_free_lf_count_get(struct dev *dev, uint16_t *nb_lfs)
@@ -946 +943 @@
-index 87a3ac80b9..636f93604e 100644
+index b393be4cf6..2e6846312b 100644
@@ -968 +965 @@
-index e638c616d8..561836760c 100644
+index f6be84ceb5..105450774e 100644
@@ -971 +968,2 @@
-@@ -10,7 +10,7 @@
+@@ -9,7 +9,7 @@
+
@@ -973 +970,0 @@
- #define RTE_LOGTYPE_IDPF_COMMON idpf_common_logtype
@@ -980 +977 @@
-@@ -18,9 +18,6 @@ extern int idpf_common_logtype;
+@@ -17,9 +17,6 @@ extern int idpf_common_logtype;
@@ -1035 +1032 @@
-index ad44b0e01f..4bf9bac23e 100644
+index f95dd33375..21a110d22e 100644
@@ -1366 +1363 @@
-index bb19854b50..7353fd4957 100644
+index 7391360925..d52f937548 100644
@@ -1659 +1656 @@
-index 6ed7a8f41c..27cdbf5ed4 100644
+index b55258689b..1713600db7 100644
@@ -1799 +1796 @@
-index 0dcf971a15..8956f7750d 100644
+index 583ba3b523..acb40bdf77 100644
@@ -1830 +1827 @@
-index 0c800fc350..5088d8ded6 100644
+index b7ca3af5a4..6d42b92d8b 100644
@@ -1843 +1840 @@
-index ca99bc6f42..700e141667 100644
+index a5271d7227..c92fdb446d 100644
@@ -1846 +1843 @@
-@@ -227,7 +227,7 @@ cryptodev_ccp_create(const char *name,
+@@ -228,7 +228,7 @@ cryptodev_ccp_create(const char *name,
@@ -1856 +1853 @@
-index dbd36a8a54..32415e815e 100644
+index c2a807fa94..cf163e0208 100644
@@ -1859 +1856 @@
-@@ -1953,7 +1953,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -1952,7 +1952,7 @@ fill_sess_aead(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1868 +1865 @@
-@@ -2037,7 +2037,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
+@@ -2036,7 +2036,7 @@ fill_sm_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *ses
@@ -1877 +1874 @@
-@@ -2114,7 +2114,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
+@@ -2113,7 +2113,7 @@ fill_sess_cipher(struct rte_crypto_sym_xform *xform, struct cnxk_se_sess *sess)
@@ -1887 +1884 @@
-index c1f7181d55..99b6359e52 100644
+index 6ae356ace0..b65bea3b3f 100644
@@ -1959 +1956 @@
-@@ -1447,7 +1447,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1446,7 +1446,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1968 +1965 @@
-@@ -1476,7 +1476,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1475,7 +1475,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1977 +1974 @@
-@@ -1494,7 +1494,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1493,7 +1493,7 @@ dpaa2_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -1986 +1983 @@
-@@ -1570,7 +1570,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1569,7 +1569,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -1995 +1992 @@
-@@ -1603,7 +1603,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
+@@ -1602,7 +1602,7 @@ sec_fd_to_mbuf(const struct qbman_fd *fd, struct dpaa2_sec_qp *qp)
@@ -2004 +2001 @@
-@@ -1825,7 +1825,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1824,7 +1824,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2013 +2010 @@
-@@ -1842,7 +1842,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
+@@ -1841,7 +1841,7 @@ dpaa2_sec_enqueue_burst_ordered(void *qp, struct rte_crypto_op **ops,
@@ -2022 +2019 @@
-@@ -1885,7 +1885,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1884,7 +1884,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2031 +2028 @@
-@@ -1938,7 +1938,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1937,7 +1937,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2040 +2037 @@
-@@ -1949,7 +1949,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -1948,7 +1948,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2048,2 +2045,2 @@
- dpaa2_sec_dump(ops[num_rx], stdout);
-@@ -1967,7 +1967,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+ dpaa2_sec_dump(ops[num_rx]);
+@@ -1966,7 +1966,7 @@ dpaa2_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2058,10 +2055 @@
-@@ -2017,7 +2017,7 @@ dpaa2_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
-
- if (qp_conf->nb_descriptors < (2 * FLE_POOL_CACHE_SIZE)) {
- DPAA2_SEC_ERR("Minimum supported nb_descriptors %d,"
-- " but given %d\n", (2 * FLE_POOL_CACHE_SIZE),
-+ " but given %d", (2 * FLE_POOL_CACHE_SIZE),
- qp_conf->nb_descriptors);
- return -EINVAL;
- }
-@@ -2544,7 +2544,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
+@@ -2555,7 +2555,7 @@ dpaa2_sec_aead_init(struct rte_crypto_sym_xform *xform,
@@ -2076 +2064 @@
-@@ -4254,7 +4254,7 @@ check_devargs_handler(const char *key, const char *value,
+@@ -4275,7 +4275,7 @@ check_devargs_handler(const char *key, const char *value,
@@ -2162 +2150 @@
-index 1ddad6944e..225bf950e9 100644
+index 906ea39047..131cd90c94 100644
@@ -2174 +2162 @@
-@@ -851,7 +851,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
+@@ -849,7 +849,7 @@ dpaa_sec_deq(struct dpaa_sec_qp *qp, struct rte_crypto_op **ops, int nb_ops)
@@ -2182,2 +2170,2 @@
- dpaa_sec_dump(ctx, qp, stdout);
-@@ -1946,7 +1946,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+ dpaa_sec_dump(ctx, qp);
+@@ -1944,7 +1944,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2192 +2180 @@
-@@ -2056,7 +2056,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2054,7 +2054,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2201 +2189 @@
-@@ -2097,7 +2097,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
+@@ -2095,7 +2095,7 @@ dpaa_sec_dequeue_burst(void *qp, struct rte_crypto_op **ops,
@@ -2210 +2198 @@
-@@ -2160,7 +2160,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
+@@ -2158,7 +2158,7 @@ dpaa_sec_queue_pair_setup(struct rte_cryptodev *dev, uint16_t qp_id,
@@ -2219 +2207 @@
-@@ -2466,7 +2466,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
+@@ -2459,7 +2459,7 @@ dpaa_sec_aead_init(struct rte_cryptodev *dev __rte_unused,
@@ -2228 +2216,2 @@
-@@ -2517,9 +2517,8 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+@@ -2508,7 +2508,7 @@ dpaa_sec_detach_rxq(struct dpaa_sec_dev_private *qi, struct qman_fq *fq)
+ for (i = 0; i < RTE_DPAA_MAX_RX_QUEUE; i++) {
@@ -2230,7 +2219,3 @@
- ret = qman_retire_fq(fq, NULL);
- if (ret != 0)
-- DPAA_SEC_ERR("Queue %d is not retired"
-- " err: %d\n", fq->fqid,
-- ret);
-+ DPAA_SEC_ERR("Queue %d is not retired err: %d",
-+ fq->fqid, ret);
+ if (qman_retire_fq(fq, NULL) != 0)
+- DPAA_SEC_DEBUG("Queue is not retired\n");
++ DPAA_SEC_DEBUG("Queue is not retired");
@@ -2240 +2225 @@
-@@ -3475,7 +3474,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+@@ -3483,7 +3483,7 @@ dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
@@ -2249 +2234 @@
-@@ -3574,7 +3573,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
+@@ -3582,7 +3582,7 @@ check_devargs_handler(__rte_unused const char *key, const char *value,
@@ -2258 +2243 @@
-@@ -3637,7 +3636,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3645,7 +3645,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2267 +2252 @@
-@@ -3705,7 +3704,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
+@@ -3713,7 +3713,7 @@ dpaa_sec_dev_init(struct rte_cryptodev *cryptodev)
@@ -2277 +2262 @@
-index f8c85b6528..60dbaee4ec 100644
+index fb895a8bc6..82ac1fa1c4 100644
@@ -2280 +2265 @@
-@@ -30,7 +30,7 @@ extern int dpaa_logtype_sec;
+@@ -29,7 +29,7 @@ extern int dpaa_logtype_sec;
@@ -2284,2 +2269,2 @@
-- RTE_LOG_DP(level, DPAA_SEC, fmt, ## args)
-+ RTE_LOG_DP_LINE(level, DPAA_SEC, fmt, ## args)
+- RTE_LOG_DP(level, PMD, fmt, ## args)
++ RTE_LOG_DP_LINE(level, PMD, fmt, ## args)
@@ -2343 +2328 @@
-index be6dbe9b1b..d42acd913c 100644
+index 52722f94a0..252bcb3192 100644
@@ -2356 +2341 @@
-index ef4228bd38..f3633091a9 100644
+index 80de25c65b..8e74645e0a 100644
@@ -2359 +2344 @@
-@@ -113,7 +113,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -107,7 +107,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2368 +2353 @@
-@@ -136,7 +136,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -130,7 +130,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2377 +2362 @@
-@@ -171,7 +171,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -165,7 +165,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2386 +2371 @@
-@@ -198,7 +198,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -192,7 +192,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2395 +2380 @@
-@@ -211,7 +211,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -205,7 +205,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2404 +2389 @@
-@@ -223,11 +223,11 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -217,11 +217,11 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2418 +2403 @@
-@@ -243,7 +243,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -237,7 +237,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2427 +2412 @@
-@@ -258,7 +258,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -252,7 +252,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2436 +2421 @@
-@@ -389,7 +389,7 @@ aesni_mb_set_session_auth_parameters(IMB_MGR *mb_mgr,
+@@ -361,7 +361,7 @@ aesni_mb_set_session_auth_parameters(const IMB_MGR *mb_mgr,
@@ -2445 +2430 @@
-@@ -725,7 +725,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -691,7 +691,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2454 +2439 @@
-@@ -761,7 +761,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -727,7 +727,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2463 +2448 @@
-@@ -782,7 +782,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
+@@ -748,7 +748,7 @@ aesni_mb_set_session_aead_parameters(const IMB_MGR *mb_mgr,
@@ -2472 +2457 @@
-@@ -1234,7 +1234,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
+@@ -1200,7 +1200,7 @@ handle_sgl_linear(IMB_JOB *job, struct rte_crypto_op *op, uint32_t dst_offset,
@@ -2482 +2467 @@
-index a96779f059..65f0e5c568 100644
+index e64df1a462..a0b354bb83 100644
@@ -2504 +2489 @@
-index 6e2afde34f..3104e6d31e 100644
+index 4647d568de..aa2363ef15 100644
@@ -2916 +2901 @@
-index 491f5ecd5b..e43884e69b 100644
+index 2bf3060278..5d240a3de1 100644
@@ -2919 +2904 @@
-@@ -1531,7 +1531,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev)
+@@ -1520,7 +1520,7 @@ qat_asym_dev_create(struct qat_pci_device *qat_pci_dev,
@@ -2926,2 +2911,2 @@
- if (qat_pci_dev->qat_dev_gen == QAT_VQAT &&
- sub_id != ADF_VQAT_ASYM_PCI_SUBSYSTEM_ID) {
+ if (gen_dev_ops->cryptodev_ops == NULL) {
+ QAT_LOG(ERR, "Device %s does not support asymmetric crypto",
@@ -2929 +2914 @@
-index eb267db424..50d687fd37 100644
+index 9f4f6c3d93..224cc0ab50 100644
@@ -2932 +2917 @@
-@@ -581,7 +581,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
+@@ -569,7 +569,7 @@ qat_sym_session_configure_cipher(struct rte_cryptodev *dev,
@@ -2941 +2926 @@
-@@ -1180,7 +1180,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
+@@ -1073,7 +1073,7 @@ qat_sym_session_configure_aead(struct rte_cryptodev *dev,
@@ -2950 +2935 @@
-@@ -1805,7 +1805,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
+@@ -1676,7 +1676,7 @@ static int aes_ipsecmb_job(uint8_t *in, uint8_t *out, IMB_MGR *m,
@@ -2959 +2944 @@
-@@ -2657,10 +2657,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
+@@ -2480,10 +2480,8 @@ int qat_sym_cd_auth_set(struct qat_sym_session *cdesc,
@@ -3187 +3172 @@
-index 7cd6ebc1e0..bce4b4b277 100644
+index 4db3b0554c..8bc076f5d5 100644
@@ -3190 +3175 @@
-@@ -357,7 +357,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
+@@ -358,7 +358,7 @@ hisi_dma_start(struct rte_dma_dev *dev)
@@ -3199 +3184 @@
-@@ -630,7 +630,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
+@@ -631,7 +631,7 @@ hisi_dma_scan_cq(struct hisi_dma_dev *hw)
@@ -3208 +3193 @@
-@@ -912,7 +912,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -913,7 +913,7 @@ hisi_dma_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -3231 +3216 @@
-index 81637d9420..60ac219559 100644
+index a78889a7ef..2ee78773bb 100644
@@ -3234 +3219 @@
-@@ -324,7 +324,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -323,7 +323,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3243 +3228 @@
-@@ -339,7 +339,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -338,7 +338,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3252 +3237 @@
-@@ -365,7 +365,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
+@@ -364,7 +364,7 @@ idxd_dmadev_probe_pci(struct rte_pci_driver *drv, struct rte_pci_device *dev)
@@ -3336 +3321 @@
-index f0a4998bdd..c43ab864ca 100644
+index 5044cb17ef..9dc5edb3fb 100644
@@ -3339 +3324 @@
-@@ -171,7 +171,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
+@@ -168,7 +168,7 @@ dlb2_hw_query_resources(struct dlb2_eventdev *dlb2)
@@ -3348 +3333 @@
-@@ -259,7 +259,7 @@ set_producer_coremask(const char *key __rte_unused,
+@@ -256,7 +256,7 @@ set_producer_coremask(const char *key __rte_unused,
@@ -3357 +3342 @@
-@@ -293,7 +293,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -290,7 +290,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3366 +3351 @@
-@@ -304,7 +304,7 @@ set_max_cq_depth(const char *key __rte_unused,
+@@ -301,7 +301,7 @@ set_max_cq_depth(const char *key __rte_unused,
@@ -3375 +3360 @@
-@@ -322,7 +322,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -319,7 +319,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3384 +3369 @@
-@@ -333,7 +333,7 @@ set_max_enq_depth(const char *key __rte_unused,
+@@ -330,7 +330,7 @@ set_max_enq_depth(const char *key __rte_unused,
@@ -3393 +3378 @@
-@@ -351,7 +351,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -348,7 +348,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3402 +3387 @@
-@@ -361,7 +361,7 @@ set_max_num_events(const char *key __rte_unused,
+@@ -358,7 +358,7 @@ set_max_num_events(const char *key __rte_unused,
@@ -3411 +3396 @@
-@@ -378,7 +378,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -375,7 +375,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3420 +3405 @@
-@@ -388,7 +388,7 @@ set_num_dir_credits(const char *key __rte_unused,
+@@ -385,7 +385,7 @@ set_num_dir_credits(const char *key __rte_unused,
@@ -3429 +3414 @@
-@@ -405,7 +405,7 @@ set_dev_id(const char *key __rte_unused,
+@@ -402,7 +402,7 @@ set_dev_id(const char *key __rte_unused,
@@ -3438 +3423 @@
-@@ -425,7 +425,7 @@ set_poll_interval(const char *key __rte_unused,
+@@ -422,7 +422,7 @@ set_poll_interval(const char *key __rte_unused,
@@ -3447 +3432 @@
-@@ -445,7 +445,7 @@ set_port_cos(const char *key __rte_unused,
+@@ -442,7 +442,7 @@ set_port_cos(const char *key __rte_unused,
@@ -3456 +3441 @@
-@@ -458,18 +458,18 @@ set_port_cos(const char *key __rte_unused,
+@@ -455,18 +455,18 @@ set_port_cos(const char *key __rte_unused,
@@ -3478 +3463 @@
-@@ -487,7 +487,7 @@ set_cos_bw(const char *key __rte_unused,
+@@ -484,7 +484,7 @@ set_cos_bw(const char *key __rte_unused,
@@ -3487 +3472 @@
-@@ -495,11 +495,11 @@ set_cos_bw(const char *key __rte_unused,
+@@ -492,11 +492,11 @@ set_cos_bw(const char *key __rte_unused,
@@ -3501 +3486 @@
-@@ -515,7 +515,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -512,7 +512,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3510 +3495 @@
-@@ -524,7 +524,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
+@@ -521,7 +521,7 @@ set_sw_credit_quanta(const char *key __rte_unused,
@@ -3519 +3504 @@
-@@ -540,7 +540,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
+@@ -537,7 +537,7 @@ set_hw_credit_quanta(const char *key __rte_unused,
@@ -3528 +3513 @@
-@@ -560,7 +560,7 @@ set_default_depth_thresh(const char *key __rte_unused,
+@@ -557,7 +557,7 @@ set_default_depth_thresh(const char *key __rte_unused,
@@ -3537 +3522 @@
-@@ -579,7 +579,7 @@ set_vector_opts_enab(const char *key __rte_unused,
+@@ -576,7 +576,7 @@ set_vector_opts_enab(const char *key __rte_unused,
@@ -3546 +3531 @@
-@@ -599,7 +599,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
+@@ -596,7 +596,7 @@ set_default_ldb_port_allocation(const char *key __rte_unused,
@@ -3555 +3540 @@
-@@ -619,7 +619,7 @@ set_enable_cq_weight(const char *key __rte_unused,
+@@ -616,7 +616,7 @@ set_enable_cq_weight(const char *key __rte_unused,
@@ -3564 +3549 @@
-@@ -640,7 +640,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -637,7 +637,7 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3573 +3558 @@
-@@ -657,18 +657,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
+@@ -654,18 +654,18 @@ set_qid_depth_thresh(const char *key __rte_unused,
@@ -3595 +3580 @@
-@@ -688,7 +688,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -685,7 +685,7 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3604 +3589 @@
-@@ -705,18 +705,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
+@@ -702,18 +702,18 @@ set_qid_depth_thresh_v2_5(const char *key __rte_unused,
@@ -3626 +3611 @@
-@@ -738,7 +738,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
+@@ -735,7 +735,7 @@ dlb2_eventdev_info_get(struct rte_eventdev *dev,
@@ -3635 +3620 @@
-@@ -781,7 +781,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -778,7 +778,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3644 +3629 @@
-@@ -809,7 +809,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -806,7 +806,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3653 +3638 @@
-@@ -854,7 +854,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
+@@ -851,7 +851,7 @@ dlb2_hw_create_sched_domain(struct dlb2_eventdev *dlb2,
@@ -3662 +3647 @@
-@@ -930,27 +930,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -927,27 +927,27 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3694 +3679 @@
-@@ -1000,7 +1000,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
+@@ -997,7 +997,7 @@ dlb2_eventdev_configure(const struct rte_eventdev *dev)
@@ -3703 +3688 @@
-@@ -1068,7 +1068,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
+@@ -1065,7 +1065,7 @@ dlb2_get_sn_allocation(struct dlb2_eventdev *dlb2, int group)
@@ -3712 +3697 @@
-@@ -1088,7 +1088,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
+@@ -1085,7 +1085,7 @@ dlb2_set_sn_allocation(struct dlb2_eventdev *dlb2, int group, int num)
@@ -3721 +3706 @@
-@@ -1107,7 +1107,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
+@@ -1104,7 +1104,7 @@ dlb2_get_sn_occupancy(struct dlb2_eventdev *dlb2, int group)
@@ -3730 +3715 @@
-@@ -1161,7 +1161,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
+@@ -1158,7 +1158,7 @@ dlb2_program_sn_allocation(struct dlb2_eventdev *dlb2,
@@ -3739 +3724 @@
-@@ -1236,7 +1236,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
+@@ -1233,7 +1233,7 @@ dlb2_hw_create_ldb_queue(struct dlb2_eventdev *dlb2,
@@ -3748 +3733 @@
-@@ -1272,7 +1272,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
+@@ -1269,7 +1269,7 @@ dlb2_eventdev_ldb_queue_setup(struct rte_eventdev *dev,
@@ -3757 +3742 @@
-@@ -1380,7 +1380,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1377,7 +1377,7 @@ dlb2_init_consume_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3766 +3751 @@
-@@ -1412,7 +1412,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
+@@ -1409,7 +1409,7 @@ dlb2_init_int_arm_qe(struct dlb2_port *qm_port, char *mz_name)
@@ -3775 +3760 @@
-@@ -1440,20 +1440,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
+@@ -1437,20 +1437,20 @@ dlb2_init_qe_mem(struct dlb2_port *qm_port, char *mz_name)
@@ -3799 +3784 @@
-@@ -1536,14 +1536,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1533,14 +1533,14 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3816 +3801 @@
-@@ -1579,7 +1579,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1576,7 +1576,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3825 +3810 @@
-@@ -1602,7 +1602,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1599,7 +1599,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3834 +3819 @@
-@@ -1615,7 +1615,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
+@@ -1612,7 +1612,7 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2,
@@ -3843 +3828 @@
-@@ -1717,7 +1717,7 @@ error_exit:
+@@ -1714,7 +1714,7 @@ error_exit:
@@ -3852 +3837 @@
-@@ -1761,13 +1761,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1758,13 +1758,13 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3868 +3853 @@
-@@ -1802,7 +1802,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1799,7 +1799,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3877 +3862 @@
-@@ -1827,7 +1827,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
+@@ -1824,7 +1824,7 @@ dlb2_hw_create_dir_port(struct dlb2_eventdev *dlb2,
@@ -3886 +3871 @@
-@@ -1916,7 +1916,7 @@ error_exit:
+@@ -1913,7 +1913,7 @@ error_exit:
@@ -3895 +3880 @@
-@@ -1932,7 +1932,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1929,7 +1929,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3904 +3889 @@
-@@ -1950,7 +1950,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1947,7 +1947,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3913 +3898 @@
-@@ -1982,7 +1982,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -1979,7 +1979,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3922 +3907 @@
-@@ -2004,7 +2004,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2001,7 +2001,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3931 +3916 @@
-@@ -2015,7 +2015,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
+@@ -2012,7 +2012,7 @@ dlb2_eventdev_port_setup(struct rte_eventdev *dev,
@@ -3940 +3925 @@
-@@ -2082,9 +2082,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
+@@ -2079,9 +2079,9 @@ dlb2_hw_map_ldb_qid_to_port(struct dlb2_hw_dev *handle,
@@ -3952 +3937 @@
-@@ -2117,7 +2117,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
+@@ -2114,7 +2114,7 @@ dlb2_event_queue_join_ldb(struct dlb2_eventdev *dlb2,
@@ -3961 +3946 @@
-@@ -2154,7 +2154,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
+@@ -2151,7 +2151,7 @@ dlb2_hw_create_dir_queue(struct dlb2_eventdev *dlb2,
@@ -3970 +3955 @@
-@@ -2172,7 +2172,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
+@@ -2169,7 +2169,7 @@ dlb2_eventdev_dir_queue_setup(struct dlb2_eventdev *dlb2,
@@ -3979 +3964 @@
-@@ -2202,7 +2202,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
+@@ -2199,7 +2199,7 @@ dlb2_do_port_link(struct rte_eventdev *dev,
@@ -3988 +3973 @@
-@@ -2240,7 +2240,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2237,7 +2237,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -3997 +3982 @@
-@@ -2250,7 +2250,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2247,7 +2247,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4006 +3991 @@
-@@ -2258,7 +2258,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2255,7 +2255,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4015 +4000 @@
-@@ -2267,7 +2267,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
+@@ -2264,7 +2264,7 @@ dlb2_validate_port_link(struct dlb2_eventdev_port *ev_port,
@@ -4024 +4009 @@
-@@ -2289,14 +2289,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
+@@ -2286,14 +2286,14 @@ dlb2_eventdev_port_link(struct rte_eventdev *dev, void *event_port,
@@ -4041 +4026 @@
-@@ -2381,7 +2381,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
+@@ -2378,7 +2378,7 @@ dlb2_hw_unmap_ldb_qid_from_port(struct dlb2_hw_dev *handle,
@@ -4050 +4035 @@
-@@ -2434,7 +2434,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2431,7 +2431,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4059 +4044 @@
-@@ -2459,7 +2459,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2456,7 +2456,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4068 +4053 @@
-@@ -2477,7 +2477,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
+@@ -2474,7 +2474,7 @@ dlb2_eventdev_port_unlink(struct rte_eventdev *dev, void *event_port,
@@ -4077 +4062 @@
-@@ -2504,7 +2504,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2501,7 +2501,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4086 +4071 @@
-@@ -2516,7 +2516,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
+@@ -2513,7 +2513,7 @@ dlb2_eventdev_port_unlinks_in_progress(struct rte_eventdev *dev,
@@ -4095 +4080 @@
-@@ -2609,7 +2609,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2606,7 +2606,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4104 +4089 @@
-@@ -2645,7 +2645,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
+@@ -2642,7 +2642,7 @@ dlb2_eventdev_start(struct rte_eventdev *dev)
@@ -4113 +4098 @@
-@@ -2890,7 +2890,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2887,7 +2887,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4115 +4100 @@
- DLB2_LOG_LINE_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED");
+ DLB2_LOG_DBG("dlb2: put_qe: RTE_SCHED_TYPE_ORDERED\n");
@@ -4122 +4107 @@
-@@ -2909,7 +2909,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
+@@ -2906,7 +2906,7 @@ dlb2_event_enqueue_prep(struct dlb2_eventdev_port *ev_port,
@@ -4131 +4116 @@
-@@ -3156,7 +3156,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3153,7 +3153,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4140 +4125 @@
-@@ -3213,7 +3213,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
+@@ -3210,7 +3210,7 @@ dlb2_event_release(struct dlb2_eventdev *dlb2,
@@ -4149 +4134 @@
-@@ -3367,7 +3367,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
+@@ -3364,7 +3364,7 @@ dlb2_process_dequeue_qes(struct dlb2_eventdev_port *ev_port,
@@ -4158 +4143 @@
-@@ -4283,7 +4283,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4278,7 +4278,7 @@ dlb2_get_ldb_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4167 +4152 @@
-@@ -4303,7 +4303,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
+@@ -4298,7 +4298,7 @@ dlb2_get_dir_queue_depth(struct dlb2_eventdev *dlb2,
@@ -4176 +4161 @@
-@@ -4394,7 +4394,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4389,7 +4389,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4185 +4170 @@
-@@ -4402,7 +4402,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4397,7 +4397,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4194 +4179 @@
-@@ -4420,7 +4420,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4415,7 +4415,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4203 +4188 @@
-@@ -4435,7 +4435,7 @@ dlb2_drain(struct rte_eventdev *dev)
+@@ -4430,7 +4430,7 @@ dlb2_drain(struct rte_eventdev *dev)
@@ -4212 +4197 @@
-@@ -4454,7 +4454,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
+@@ -4449,7 +4449,7 @@ dlb2_eventdev_stop(struct rte_eventdev *dev)
@@ -4221 +4206 @@
-@@ -4610,7 +4610,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4605,7 +4605,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4230 +4215 @@
-@@ -4618,14 +4618,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4613,14 +4613,14 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4247 +4232 @@
-@@ -4648,7 +4648,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4643,7 +4643,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4256 +4241 @@
-@@ -4656,7 +4656,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4651,7 +4651,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4265 +4250 @@
-@@ -4664,7 +4664,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4659,7 +4659,7 @@ dlb2_primary_eventdev_probe(struct rte_eventdev *dev,
@@ -4274 +4259 @@
-@@ -4694,14 +4694,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
+@@ -4689,14 +4689,14 @@ dlb2_secondary_eventdev_probe(struct rte_eventdev *dev,
@@ -4292 +4277 @@
-index 22094f30bb..c037cfe786 100644
+index ff15271dda..28de48e24e 100644
@@ -4561 +4546 @@
-index b3576e5f42..ed4e6e424c 100644
+index 3d15250e11..019e90f7e7 100644
@@ -4653 +4638 @@
-index 1273455673..f0b2c7de99 100644
+index dd4e64395f..4658eaf3a2 100644
@@ -4674 +4659 @@
-@@ -851,7 +851,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
+@@ -849,7 +849,7 @@ dpaa2_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
@@ -4683 +4668 @@
-@@ -885,7 +885,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
+@@ -883,7 +883,7 @@ dpaa2_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
@@ -4692 +4677 @@
-@@ -905,7 +905,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
+@@ -903,7 +903,7 @@ dpaa2_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
@@ -4701 +4686 @@
-@@ -928,7 +928,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
+@@ -926,7 +926,7 @@ dpaa2_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
@@ -4710 +4695 @@
-@@ -1161,7 +1161,7 @@ dpaa2_eventdev_destroy(const char *name)
+@@ -1159,7 +1159,7 @@ dpaa2_eventdev_destroy(const char *name)
@@ -4733 +4718 @@
-index b34a5fcacd..25853166bf 100644
+index 0cccaf7e97..fe0c0ede6f 100644
@@ -4844 +4829 @@
-@@ -662,7 +662,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -659,7 +659,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4853 +4838 @@
-@@ -709,7 +709,7 @@ opdl_probe(struct rte_vdev_device *vdev)
+@@ -706,7 +706,7 @@ opdl_probe(struct rte_vdev_device *vdev)
@@ -4862 +4847 @@
-@@ -753,7 +753,7 @@ opdl_remove(struct rte_vdev_device *vdev)
+@@ -750,7 +750,7 @@ opdl_remove(struct rte_vdev_device *vdev)
@@ -5367 +5352 @@
-index 19a52afc7d..7913bc547e 100644
+index 2096496917..babe77a20f 100644
@@ -5415 +5400 @@
-@@ -772,7 +772,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -769,7 +769,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5424 +5409 @@
-@@ -780,7 +780,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -777,7 +777,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5433 +5418 @@
-@@ -788,7 +788,7 @@ sw_start(struct rte_eventdev *dev)
+@@ -785,7 +785,7 @@ sw_start(struct rte_eventdev *dev)
@@ -5442 +5427 @@
-@@ -1000,7 +1000,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -997,7 +997,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5451 +5436 @@
-@@ -1070,7 +1070,7 @@ sw_probe(struct rte_vdev_device *vdev)
+@@ -1067,7 +1067,7 @@ sw_probe(struct rte_vdev_device *vdev)
@@ -5460 +5445 @@
-@@ -1134,7 +1134,7 @@ sw_remove(struct rte_vdev_device *vdev)
+@@ -1131,7 +1131,7 @@ sw_remove(struct rte_vdev_device *vdev)
@@ -5492 +5477 @@
-index 42e17d984c..886fb7fbb0 100644
+index 84371d5d1a..b0c6d153e4 100644
@@ -5495 +5480 @@
-@@ -69,7 +69,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
+@@ -67,7 +67,7 @@ rte_hw_mbuf_create_pool(struct rte_mempool *mp)
@@ -5504 +5489 @@
-@@ -213,7 +213,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
+@@ -198,7 +198,7 @@ rte_dpaa2_mbuf_release(struct rte_mempool *pool __rte_unused,
@@ -5513 +5498 @@
-@@ -357,7 +357,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
+@@ -342,7 +342,7 @@ rte_dpaa2_mbuf_alloc_bulk(struct rte_mempool *pool,
@@ -5522 +5507 @@
-@@ -472,7 +472,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
+@@ -457,7 +457,7 @@ dpaa2_populate(struct rte_mempool *mp, unsigned int max_objs,
@@ -5954 +5939 @@
-index 17b7b5c543..5448a5f3d7 100644
+index 6ce87f83f4..da45ebf45f 100644
@@ -5957 +5942 @@
-@@ -1353,7 +1353,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
+@@ -1352,7 +1352,7 @@ axgbe_priority_flow_ctrl_set(struct rte_eth_dev *dev,
@@ -6000 +5985 @@
-index cdedf67c6f..209cf5a80c 100644
+index 06c21ebe6d..3cca8a07f3 100644
@@ -6087,39 +6071,0 @@
-diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
-index 55ed54bb0f..ad6bc1ec21 100644
---- a/drivers/net/cnxk/cn10k_ethdev.c
-+++ b/drivers/net/cnxk/cn10k_ethdev.c
-@@ -707,7 +707,7 @@ cn10k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- available_pkts = cn10k_nix_rx_avail_get(rxq);
-
- if ((offset + num - 1) >= available_pkts) {
-- plt_err("Invalid BD num=%u\n", num);
-+ plt_err("Invalid BD num=%u", num);
- return -EINVAL;
- }
-
-diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
-index ea92b1dcb6..84c88655f8 100644
---- a/drivers/net/cnxk/cn9k_ethdev.c
-+++ b/drivers/net/cnxk/cn9k_ethdev.c
-@@ -708,7 +708,7 @@ cn9k_rx_descriptor_dump(const struct rte_eth_dev *eth_dev, uint16_t qid,
- available_pkts = cn9k_nix_rx_avail_get(rxq);
-
- if ((offset + num - 1) >= available_pkts) {
-- plt_err("Invalid BD num=%u\n", num);
-+ plt_err("Invalid BD num=%u", num);
- return -EINVAL;
- }
-
-diff --git a/drivers/net/cnxk/cnxk_eswitch_devargs.c b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-index 8167ce673a..655813c71a 100644
---- a/drivers/net/cnxk/cnxk_eswitch_devargs.c
-+++ b/drivers/net/cnxk/cnxk_eswitch_devargs.c
-@@ -26,7 +26,7 @@ populate_repr_hw_info(struct cnxk_eswitch_dev *eswitch_dev, struct rte_eth_devar
-
- if (eth_da->type != RTE_ETH_REPRESENTOR_VF && eth_da->type != RTE_ETH_REPRESENTOR_PF &&
- eth_da->type != RTE_ETH_REPRESENTOR_SF) {
-- plt_err("unsupported representor type %d\n", eth_da->type);
-+ plt_err("unsupported representor type %d", eth_da->type);
- return -ENOTSUP;
- }
-
@@ -6127 +6073 @@
-index 38746c81c5..33bac55704 100644
+index c841b31051..60baf806ab 100644
@@ -6130 +6076 @@
-@@ -589,7 +589,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -582,7 +582,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6139 +6085 @@
-@@ -617,7 +617,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
+@@ -610,7 +610,7 @@ cnxk_nix_process_rx_conf(const struct rte_eth_rxconf *rx_conf,
@@ -6178 +6124 @@
-index b1093dd584..5b0948e07a 100644
+index c8f4848f92..89e00f8fc7 100644
@@ -6181 +6127 @@
-@@ -532,7 +532,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
+@@ -528,7 +528,7 @@ cnxk_nix_sq_flush(struct rte_eth_dev *eth_dev)
@@ -6190,66 +6135,0 @@
-diff --git a/drivers/net/cnxk/cnxk_rep.c b/drivers/net/cnxk/cnxk_rep.c
-index ca0637bde5..652d419ad8 100644
---- a/drivers/net/cnxk/cnxk_rep.c
-+++ b/drivers/net/cnxk/cnxk_rep.c
-@@ -270,7 +270,7 @@ cnxk_representee_mtu_msg_process(struct cnxk_eswitch_dev *eswitch_dev, uint16_t
-
- rep_dev = cnxk_rep_pmd_priv(rep_eth_dev);
- if (rep_dev->rep_id == rep_id) {
-- plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d\n", mtu, hw_func,
-+ plt_rep_dbg("Setting MTU as %d for hw_func %x rep_id %d", mtu, hw_func,
- rep_id);
- rep_dev->repte_mtu = mtu;
- break;
-@@ -423,7 +423,7 @@ cnxk_rep_parent_setup(struct cnxk_eswitch_dev *eswitch_dev)
- plt_err("Failed to alloc switch domain: %d", rc);
- goto fail;
- }
-- plt_rep_dbg("Allocated switch domain id %d for pf %d\n", switch_domain_id, pf);
-+ plt_rep_dbg("Allocated switch domain id %d for pf %d", switch_domain_id, pf);
- eswitch_dev->sw_dom[j].switch_domain_id = switch_domain_id;
- eswitch_dev->sw_dom[j].pf = pf;
- prev_pf = pf;
-@@ -549,7 +549,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- int i, j, rc;
-
- if (eswitch_dev->repr_cnt.nb_repr_created > RTE_MAX_ETHPORTS) {
-- plt_err("nb_representor_ports %d > %d MAX ETHPORTS\n",
-+ plt_err("nb_representor_ports %d > %d MAX ETHPORTS",
- eswitch_dev->repr_cnt.nb_repr_created, RTE_MAX_ETHPORTS);
- rc = -EINVAL;
- goto fail;
-@@ -604,7 +604,7 @@ cnxk_rep_dev_probe(struct rte_pci_device *pci_dev, struct cnxk_eswitch_dev *eswi
- name, cnxk_representee_msg_thread_main,
- eswitch_dev);
- if (rc != 0) {
-- plt_err("Failed to create thread for VF mbox handling\n");
-+ plt_err("Failed to create thread for VF mbox handling");
- goto thread_fail;
- }
- }
-diff --git a/drivers/net/cnxk/cnxk_rep.h b/drivers/net/cnxk/cnxk_rep.h
-index ad89649702..aaae2d4e8f 100644
---- a/drivers/net/cnxk/cnxk_rep.h
-+++ b/drivers/net/cnxk/cnxk_rep.h
-@@ -93,7 +93,7 @@ cnxk_rep_pmd_priv(const struct rte_eth_dev *eth_dev)
- static __rte_always_inline void
- cnxk_rep_pool_buffer_stats(struct rte_mempool *pool)
- {
-- plt_rep_dbg(" pool %s size %d buffer count in use %d available %d\n", pool->name,
-+ plt_rep_dbg(" pool %s size %d buffer count in use %d available %d", pool->name,
- pool->size, rte_mempool_in_use_count(pool), rte_mempool_avail_count(pool));
- }
-
-diff --git a/drivers/net/cpfl/cpfl_ethdev.c b/drivers/net/cpfl/cpfl_ethdev.c
-index 222e178949..6f6707a0bd 100644
---- a/drivers/net/cpfl/cpfl_ethdev.c
-+++ b/drivers/net/cpfl/cpfl_ethdev.c
-@@ -2284,7 +2284,7 @@ get_running_host_id(void)
- uint8_t host_id = CPFL_INVALID_HOST_ID;
-
- if (uname(&unamedata) != 0)
-- PMD_INIT_LOG(ERR, "Cannot fetch node_name for host\n");
-+ PMD_INIT_LOG(ERR, "Cannot fetch node_name for host");
- else if (strstr(unamedata.nodename, "ipu-imc"))
- PMD_INIT_LOG(ERR, "CPFL PMD cannot be running on IMC.");
- else if (strstr(unamedata.nodename, "ipu-acc"))
@@ -6310 +6190 @@
-index 449bbda7ca..88374ea905 100644
+index 8e610b6bba..c5b1f161fd 100644
@@ -6331 +6211 @@
-@@ -1934,7 +1934,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
+@@ -1933,7 +1933,7 @@ dpaa2_dev_link_update(struct rte_eth_dev *dev,
@@ -6340 +6220 @@
-@@ -2308,7 +2308,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
+@@ -2307,7 +2307,7 @@ int dpaa2_eth_eventq_attach(const struct rte_eth_dev *dev,
@@ -6349 +6229 @@
-@@ -2424,7 +2424,7 @@ rte_pmd_dpaa2_thread_init(void)
+@@ -2423,7 +2423,7 @@ rte_pmd_dpaa2_thread_init(void)
@@ -6358 +6238 @@
-@@ -2839,7 +2839,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2838,7 +2838,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6367 +6247 @@
-@@ -2847,7 +2847,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
+@@ -2846,7 +2846,7 @@ dpaa2_dev_init(struct rte_eth_dev *eth_dev)
@@ -6376 +6256 @@
-@@ -2930,7 +2930,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
+@@ -2929,7 +2929,7 @@ rte_dpaa2_probe(struct rte_dpaa2_driver *dpaa2_drv,
@@ -6386 +6266 @@
-index 6c7bac4d48..62e350d736 100644
+index eec7e60650..e590f6f748 100644
@@ -6398 +6278 @@
-@@ -3602,7 +3602,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3601,7 +3601,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6407 +6287 @@
-@@ -3720,14 +3720,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3718,14 +3718,14 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6424 +6304 @@
-@@ -3749,7 +3749,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3747,7 +3747,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6433 +6313 @@
-@@ -3774,7 +3774,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
+@@ -3772,7 +3772,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
@@ -6442 +6322 @@
-@@ -3843,20 +3843,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
+@@ -3841,20 +3841,20 @@ dpaa2_dev_verify_attr(struct dpni_attr *dpni_attr,
@@ -6467 +6347 @@
-@@ -3935,7 +3935,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3933,7 +3933,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6476 +6356 @@
-@@ -3947,7 +3947,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3945,7 +3945,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6485 +6365 @@
-@@ -3957,7 +3957,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3955,7 +3955,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6494 +6374 @@
-@@ -3967,7 +3967,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
+@@ -3965,7 +3965,7 @@ int dpaa2_flow_validate(struct rte_eth_dev *dev,
@@ -6503 +6383 @@
-@@ -4014,13 +4014,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4012,13 +4012,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6519 +6399 @@
-@@ -4031,13 +4031,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
+@@ -4029,13 +4029,13 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
@@ -6656 +6536 @@
-index 36a14526a5..59f7a172c6 100644
+index 63463c4fbf..eb649fb063 100644
@@ -6696 +6576 @@
-index cb854964b4..97d65e7181 100644
+index 8fe5bfa013..3c0f282ec3 100644
@@ -6774 +6654 @@
-index 095be27b08..1e0a483d4a 100644
+index 8858f975f8..d64a1aedd3 100644
@@ -6777 +6657 @@
-@@ -5116,7 +5116,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
+@@ -5053,7 +5053,7 @@ eth_igb_get_module_info(struct rte_eth_dev *dev,
@@ -6787 +6667 @@
-index d02ee206f1..ffbecc407c 100644
+index c9352f0746..d8c30ef150 100644
@@ -6790 +6670 @@
-@@ -151,7 +151,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
+@@ -150,7 +150,7 @@ print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)
@@ -6799 +6679 @@
-@@ -198,7 +198,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
+@@ -197,7 +197,7 @@ enetc_hardware_init(struct enetc_eth_hw *hw)
@@ -6880 +6760 @@
-index cad8db2f6f..c1dba0c0fd 100644
+index b04b6c9aa1..1121874346 100644
@@ -6883 +6763 @@
-@@ -672,7 +672,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
+@@ -670,7 +670,7 @@ static void debug_log_add_del_addr(struct rte_ether_addr *addr, bool add)
@@ -6892 +6772 @@
-@@ -695,7 +695,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -693,7 +693,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6901 +6781 @@
-@@ -703,7 +703,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -701,7 +701,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6910 +6790 @@
-@@ -716,7 +716,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
+@@ -714,7 +714,7 @@ static int enicpmd_set_mc_addr_list(struct rte_eth_dev *eth_dev,
@@ -6919 +6799 @@
-@@ -982,7 +982,7 @@ static int udp_tunnel_common_check(struct enic *enic,
+@@ -980,7 +980,7 @@ static int udp_tunnel_common_check(struct enic *enic,
@@ -6928 +6808 @@
-@@ -995,10 +995,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
+@@ -993,10 +993,10 @@ static int update_tunnel_port(struct enic *enic, uint16_t port, bool vxlan)
@@ -6941 +6821 @@
-@@ -1029,7 +1029,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
+@@ -1027,7 +1027,7 @@ static int enicpmd_dev_udp_tunnel_port_add(struct rte_eth_dev *eth_dev,
@@ -6950 +6830 @@
-@@ -1061,7 +1061,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
+@@ -1059,7 +1059,7 @@ static int enicpmd_dev_udp_tunnel_port_del(struct rte_eth_dev *eth_dev,
@@ -6959 +6839 @@
-@@ -1325,7 +1325,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1323,7 +1323,7 @@ static int eth_enic_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7188 +7068 @@
-index 09c6bff026..bcb983e4a0 100644
+index 343bd13d67..438c0c5441 100644
@@ -7200,35 +7079,0 @@
-@@ -736,7 +736,7 @@ gve_set_max_desc_cnt(struct gve_priv *priv,
- {
- if (priv->queue_format == GVE_DQO_RDA_FORMAT) {
- PMD_DRV_LOG(DEBUG, "Overriding max ring size from device for DQ "
-- "queue format to 4096.\n");
-+ "queue format to 4096.");
- priv->max_rx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- priv->max_tx_desc_cnt = GVE_MAX_QUEUE_SIZE_DQO;
- return;
-diff --git a/drivers/net/gve/gve_rx.c b/drivers/net/gve/gve_rx.c
-index 89b6ef384a..1f5fa3f1da 100644
---- a/drivers/net/gve/gve_rx.c
-+++ b/drivers/net/gve/gve_rx.c
-@@ -306,7 +306,7 @@ gve_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id,
-
- /* Ring size is required to be a power of two. */
- if (!rte_is_power_of_2(nb_desc)) {
-- PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+ PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- nb_desc);
- return -EINVAL;
- }
-diff --git a/drivers/net/gve/gve_tx.c b/drivers/net/gve/gve_tx.c
-index 658bfb972b..015ea9646b 100644
---- a/drivers/net/gve/gve_tx.c
-+++ b/drivers/net/gve/gve_tx.c
-@@ -561,7 +561,7 @@ gve_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_id, uint16_t nb_desc,
-
- /* Ring size is required to be a power of two. */
- if (!rte_is_power_of_2(nb_desc)) {
-- PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.\n",
-+ PMD_DRV_LOG(ERR, "Invalid ring size %u. GVE ring size must be a power of 2.",
- nb_desc);
- return -EINVAL;
- }
@@ -7398 +7243 @@
-index 26fa2eb951..f7162ee7bc 100644
+index 916bf30dcb..0b768ef140 100644
@@ -7491 +7336 @@
-index f847bf82bc..42f51c7621 100644
+index ffc1f6d874..2b043cd693 100644
@@ -7494 +7339 @@
-@@ -668,7 +668,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -653,7 +653,7 @@ eth_i40e_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -7503 +7348 @@
-@@ -1583,10 +1583,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
+@@ -1480,10 +1480,7 @@ eth_i40e_dev_init(struct rte_eth_dev *dev, void *init_params __rte_unused)
@@ -7515 +7360 @@
-@@ -2326,7 +2323,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2222,7 +2219,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7524 +7369 @@
-@@ -2336,7 +2333,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2232,7 +2229,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7533 +7378 @@
-@@ -2361,7 +2358,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
+@@ -2257,7 +2254,7 @@ i40e_phy_conf_link(struct i40e_hw *hw,
@@ -7542 +7387 @@
-@@ -6959,7 +6956,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6814,7 +6811,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7551 +7396 @@
-@@ -6975,7 +6972,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6830,7 +6827,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7560 +7405 @@
-@@ -6985,13 +6982,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6840,13 +6837,13 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7576 +7421 @@
-@@ -7004,7 +7001,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6859,7 +6856,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7585 +7430 @@
-@@ -7014,7 +7011,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
+@@ -6869,7 +6866,7 @@ i40e_handle_mdd_event(struct rte_eth_dev *dev)
@@ -7594 +7439 @@
-@@ -11449,7 +11446,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11304,7 +11301,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7603 +7448 @@
-@@ -11460,7 +11457,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11315,7 +11312,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7612 +7457 @@
-@@ -11490,7 +11487,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11345,7 +11342,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7621 +7466 @@
-@@ -11526,7 +11523,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
+@@ -11381,7 +11378,7 @@ static int i40e_get_module_info(struct rte_eth_dev *dev,
@@ -7630 +7475 @@
-@@ -11828,7 +11825,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11683,7 +11680,7 @@ i40e_update_customized_pctype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7639 +7484 @@
-@@ -11972,7 +11969,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
+@@ -11827,7 +11824,7 @@ i40e_update_customized_ptype(struct rte_eth_dev *dev, uint8_t *pkg,
@@ -7648,63 +7492,0 @@
-@@ -12317,7 +12314,7 @@ i40e_fec_get_capability(struct rte_eth_dev *dev,
- if (hw->mac.type == I40E_MAC_X722 &&
- !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
-- " firmware. Please update the NVM image.\n");
-+ " firmware. Please update the NVM image.");
- return -ENOTSUP;
- }
-
-@@ -12359,7 +12356,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- /* Get link info */
- ret = i40e_aq_get_link_info(hw, enable_lse, &link_status, NULL);
- if (ret != I40E_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -12369,7 +12366,7 @@ i40e_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- ret = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- NULL);
- if (ret) {
-- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -12435,7 +12432,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- if (hw->mac.type == I40E_MAC_X722 &&
- !(hw->flags & I40E_HW_FLAG_X722_FEC_REQUEST_CAPABLE)) {
- PMD_DRV_LOG(ERR, "Setting FEC encoding not supported by"
-- " firmware. Please update the NVM image.\n");
-+ " firmware. Please update the NVM image.");
- return -ENOTSUP;
- }
-
-@@ -12507,7 +12504,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- status = i40e_aq_get_phy_capabilities(hw, false, false, &abilities,
- NULL);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
-@@ -12524,7 +12521,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
- config.fec_config = req_fec & I40E_AQ_PHY_FEC_CONFIG_MASK;
- status = i40e_aq_set_phy_config(hw, &config, NULL);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
-@@ -12532,7 +12529,7 @@ i40e_fec_set(struct rte_eth_dev *dev, uint32_t fec_capa)
-
- status = i40e_update_link_info(hw);
- if (status) {
-- PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to set PHY capabilities: %d",
- status);
- return -ENOTSUP;
- }
@@ -7746 +7528 @@
-index ff977a3681..839c8a5442 100644
+index 5e693cb1ea..e65e8829d9 100644
@@ -7785,73 +7567 @@
-@@ -1564,7 +1564,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- if ((adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -1573,7 +1573,7 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- (mb->data_len > mb->pkt_len ||
- mb->data_len < I40E_TX_MIN_PKT_LEN ||
- mb->data_len > adapter->max_pkt_len)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- mb->data_len, I40E_TX_MIN_PKT_LEN, adapter->max_pkt_len);
- pkt_error = true;
- break;
-@@ -1586,13 +1586,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- * the limites.
- */
- if (mb->nb_segs > I40E_TX_MAX_MTU_SEG) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_TX_MAX_MTU_SEG);
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > I40E_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -1606,18 +1606,18 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
- /**
- * MSS outside the range are considered malicious
- */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, I40E_MIN_TSO_MSS, I40E_MAX_TSO_MSS);
- pkt_error = true;
- break;
- }
- if (mb->nb_segs > ((struct i40e_tx_queue *)tx_queue)->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > I40E_TSO_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, I40E_TSO_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -1627,13 +1627,13 @@ i40e_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts
-
- if (adapter->mbuf_check & I40E_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & I40E_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
-@@ -3573,7 +3573,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
+@@ -3467,7 +3467,7 @@ i40e_set_tx_function_flag(struct rte_eth_dev *dev, struct i40e_tx_queue *txq)
@@ -7867 +7577 @@
-index 44276dcf38..c56fcfadf0 100644
+index 54bff05675..9087909ec2 100644
@@ -7870 +7580 @@
-@@ -2383,7 +2383,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2301,7 +2301,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7879 +7589 @@
-@@ -2418,7 +2418,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
+@@ -2336,7 +2336,7 @@ static int iavf_parse_devargs(struct rte_eth_dev *dev)
@@ -7888 +7598 @@
-@@ -3059,12 +3059,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
+@@ -2972,12 +2972,12 @@ iavf_dev_reset(struct rte_eth_dev *dev)
@@ -7903 +7613 @@
-@@ -3109,7 +3109,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3022,7 +3022,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7912 +7622 @@
-@@ -3136,7 +3136,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
+@@ -3049,7 +3049,7 @@ iavf_handle_hw_reset(struct rte_eth_dev *dev)
@@ -7922 +7632 @@
-index ecc31430d1..4850b9e381 100644
+index f19aa14646..ec0dffa30e 100644
@@ -7925 +7635 @@
-@@ -3036,7 +3036,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
+@@ -3027,7 +3027,7 @@ iavf_check_vlan_up2tc(struct iavf_tx_queue *txq, struct rte_mbuf *m)
@@ -7934,58 +7643,0 @@
-@@ -3830,7 +3830,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -3838,7 +3838,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- if ((adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_SIZE) &&
- (mb->data_len < IAVF_TX_MIN_PKT_LEN ||
- mb->data_len > adapter->vf.max_pkt_len)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %u)",
- mb->data_len, IAVF_TX_MIN_PKT_LEN, adapter->vf.max_pkt_len);
- pkt_error = true;
- break;
-@@ -3848,7 +3848,7 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- /* Check condition for nb_segs > IAVF_TX_MAX_MTU_SEG. */
- if (!(ol_flags & (RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_UDP_SEG))) {
- if (mb->nb_segs > IAVF_TX_MAX_MTU_SEG) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, IAVF_TX_MAX_MTU_SEG);
- pkt_error = true;
- break;
-@@ -3856,12 +3856,12 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
- } else if ((mb->tso_segsz < IAVF_MIN_TSO_MSS) ||
- (mb->tso_segsz > IAVF_MAX_TSO_MSS)) {
- /* MSS outside the range are considered malicious */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, IAVF_MIN_TSO_MSS, IAVF_MAX_TSO_MSS);
- pkt_error = true;
- break;
- } else if (mb->nb_segs > txq->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
-@@ -3869,13 +3869,13 @@ iavf_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
-
- if (adapter->devargs.mbuf_check & IAVF_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & IAVF_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
@@ -7993 +7645 @@
-index 8f3a385ca5..91f4943a11 100644
+index 5d845bba31..a025b0ea7f 100644
@@ -8005 +7657 @@
-@@ -2088,7 +2088,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
+@@ -2087,7 +2087,7 @@ eth_ice_dcf_pci_probe(__rte_unused struct rte_pci_driver *pci_drv,
@@ -8082 +7734 @@
-index 304f959b7e..7b1bd163a2 100644
+index c1d2b91ad7..86f43050a5 100644
@@ -8085 +7737 @@
-@@ -1907,7 +1907,7 @@ no_dsn:
+@@ -1867,7 +1867,7 @@ no_dsn:
@@ -8094 +7746 @@
-@@ -1916,7 +1916,7 @@ load_fw:
+@@ -1876,7 +1876,7 @@ load_fw:
@@ -8103 +7755 @@
-@@ -2166,7 +2166,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
+@@ -2074,7 +2074,7 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
@@ -8112 +7764 @@
-@@ -2405,20 +2405,20 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2340,20 +2340,20 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8136 +7788 @@
-@@ -2470,14 +2470,14 @@ ice_dev_init(struct rte_eth_dev *dev)
+@@ -2405,14 +2405,14 @@ ice_dev_init(struct rte_eth_dev *dev)
@@ -8147 +7799 @@
- ret = ice_lldp_fltr_add_remove(hw, vsi->vsi_id, true);
+ ret = ice_vsi_config_sw_lldp(vsi, true);
@@ -8154,3 +7806,3 @@
-@@ -2502,7 +2502,7 @@ ice_dev_init(struct rte_eth_dev *dev)
- if (hw->phy_model == ICE_PHY_E822) {
- ret = ice_start_phy_timer_e822(hw, hw->pf_id);
+@@ -2439,7 +2439,7 @@ ice_dev_init(struct rte_eth_dev *dev)
+ if (hw->phy_cfg == ICE_PHY_E822) {
+ ret = ice_start_phy_timer_e822(hw, hw->pf_id, true);
@@ -8163 +7815 @@
-@@ -2748,7 +2748,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2686,7 +2686,7 @@ ice_hash_moveout(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8172 +7824 @@
-@@ -2769,7 +2769,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
+@@ -2707,7 +2707,7 @@ ice_hash_moveback(struct ice_pf *pf, struct ice_rss_hash_cfg *cfg)
@@ -8181 +7833 @@
-@@ -3164,7 +3164,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3102,7 +3102,7 @@ ice_rem_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8190 +7842 @@
-@@ -3180,15 +3180,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
+@@ -3118,15 +3118,15 @@ ice_add_rss_cfg_wrap(struct ice_pf *pf, uint16_t vsi_id,
@@ -8209 +7861 @@
-@@ -3378,7 +3378,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
+@@ -3316,7 +3316,7 @@ ice_get_default_rss_key(uint8_t *rss_key, uint32_t rss_key_size)
@@ -8218 +7870 @@
-@@ -3413,12 +3413,12 @@ static int ice_init_rss(struct ice_pf *pf)
+@@ -3351,12 +3351,12 @@ static int ice_init_rss(struct ice_pf *pf)
@@ -8233 +7885 @@
-@@ -4277,7 +4277,7 @@ ice_phy_conf_link(struct ice_hw *hw,
+@@ -4202,7 +4202,7 @@ ice_phy_conf_link(struct ice_hw *hw,
@@ -8242 +7894 @@
-@@ -5734,7 +5734,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
+@@ -5657,7 +5657,7 @@ ice_get_module_info(struct rte_eth_dev *dev,
@@ -8251 +7903 @@
-@@ -5805,7 +5805,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
+@@ -5728,7 +5728,7 @@ ice_get_module_eeprom(struct rte_eth_dev *dev,
@@ -8260,27 +7911,0 @@
-@@ -6773,7 +6773,7 @@ ice_fec_get_capability(struct rte_eth_dev *dev, struct rte_eth_fec_capa *speed_f
- ret = ice_aq_get_phy_caps(hw->port_info, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- &pcaps, NULL);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -6805,7 +6805,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
-
- ret = ice_get_link_info_safe(pf, enable_lse, &link_status);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get link information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get link information: %d",
- ret);
- return -ENOTSUP;
- }
-@@ -6815,7 +6815,7 @@ ice_fec_get(struct rte_eth_dev *dev, uint32_t *fec_capa)
- ret = ice_aq_get_phy_caps(pi, false, ICE_AQC_REPORT_TOPO_CAP_MEDIA,
- &pcaps, NULL);
- if (ret != ICE_SUCCESS) {
-- PMD_DRV_LOG(ERR, "Failed to get capability information: %d\n",
-+ PMD_DRV_LOG(ERR, "Failed to get capability information: %d",
- ret);
- return -ENOTSUP;
- }
@@ -8288 +7913 @@
-index edd8cc8f1a..741107f939 100644
+index 0b7920ad44..dd9130ace3 100644
@@ -8301 +7926 @@
-index b720e0f755..00d65bc637 100644
+index d8c46347d2..dad117679d 100644
@@ -8304 +7929 @@
-@@ -1245,13 +1245,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1242,13 +1242,13 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8320 +7945 @@
-@@ -1259,7 +1259,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
+@@ -1256,7 +1256,7 @@ ice_hash_add_raw_cfg(struct ice_adapter *ad,
@@ -8329 +7954 @@
-@@ -1381,7 +1381,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
+@@ -1378,7 +1378,7 @@ ice_hash_rem_raw_cfg(struct ice_adapter *ad,
@@ -8339 +7964 @@
-index f270498ed1..acd7539b5e 100644
+index dea6a5b535..7da314217a 100644
@@ -8342 +7967 @@
-@@ -2839,7 +2839,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
+@@ -2822,7 +2822,7 @@ ice_xmit_cleanup(struct ice_tx_queue *txq)
@@ -8351,66 +7975,0 @@
-@@ -3714,7 +3714,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- if ((adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_MBUF) &&
- (rte_mbuf_check(mb, 1, &reason) != 0)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
-+ PMD_TX_LOG(ERR, "INVALID mbuf: %s", reason);
- pkt_error = true;
- break;
- }
-@@ -3723,7 +3723,7 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- (mb->data_len > mb->pkt_len ||
- mb->data_len < ICE_TX_MIN_PKT_LEN ||
- mb->data_len > ICE_FRAME_SIZE_MAX)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out of range, reasonable range (%d - %d)",
- mb->data_len, ICE_TX_MIN_PKT_LEN, ICE_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -3736,13 +3736,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- * the limites.
- */
- if (mb->nb_segs > ICE_TX_MTU_SEG_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, ICE_TX_MTU_SEG_MAX);
- pkt_error = true;
- break;
- }
- if (mb->pkt_len > ICE_FRAME_SIZE_MAX) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds HW limit, maximum allowed value is %d",
- mb->nb_segs, ICE_FRAME_SIZE_MAX);
- pkt_error = true;
- break;
-@@ -3756,13 +3756,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
- /**
- * MSS outside the range are considered malicious
- */
-- PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)\n",
-+ PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out of range, reasonable range (%d - %u)",
- mb->tso_segsz, ICE_MIN_TSO_MSS, ICE_MAX_TSO_MSS);
- pkt_error = true;
- break;
- }
- if (mb->nb_segs > ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out of ring length");
- pkt_error = true;
- break;
- }
-@@ -3771,13 +3771,13 @@ ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
-
- if (adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_OFFLOAD) {
- if (ol_flags & ICE_TX_OFFLOAD_NOTSUP_MASK) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload is not supported");
- pkt_error = true;
- break;
- }
-
- if (!rte_validate_tx_offload(mb)) {
-- PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error\n");
-+ PMD_TX_LOG(ERR, "INVALID mbuf: TX offload setup error");
- pkt_error = true;
- break;
- }
@@ -8692 +8251 @@
-index d88d4065f1..357307b2e0 100644
+index a44497ce51..3ac65ca3b3 100644
@@ -8695 +8254 @@
-@@ -1155,10 +1155,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+@@ -1154,10 +1154,7 @@ eth_ixgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
@@ -8707 +8266 @@
-@@ -1783,7 +1780,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+@@ -1782,7 +1779,7 @@ eth_ixgbe_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
@@ -8825 +8384 @@
-index 91ba395ac3..e967fe5e48 100644
+index 0a0f639e39..002bc71c2a 100644
@@ -8828 +8387 @@
-@@ -173,14 +173,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -171,14 +171,14 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8845 +8404 @@
-@@ -193,7 +193,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
+@@ -191,7 +191,7 @@ ixgbe_add_tx_flow_control_drop_filter(struct rte_eth_dev *eth_dev)
@@ -8854 +8413 @@
-@@ -424,7 +424,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
+@@ -422,7 +422,7 @@ ixgbe_disable_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf)
@@ -8863 +8422 @@
-@@ -630,7 +630,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -628,7 +628,7 @@ ixgbe_negotiate_vf_api(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8872 +8431 @@
-@@ -679,7 +679,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -677,7 +677,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8881 +8440 @@
-@@ -713,7 +713,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -711,7 +711,7 @@ ixgbe_get_vf_queues(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8890 +8449 @@
-@@ -769,7 +769,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -767,7 +767,7 @@ ixgbe_set_vf_mc_promisc(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8899 +8458 @@
-@@ -806,7 +806,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
+@@ -804,7 +804,7 @@ ixgbe_set_vf_macvlan_msg(struct rte_eth_dev *dev, uint32_t vf, uint32_t *msgbuf)
@@ -8944 +8503 @@
-index 16da22b5c6..e220ffaf92 100644
+index 18377d9caf..f05f4c24df 100644
@@ -8957 +8516 @@
-index c19db5c0eb..9c2872429f 100644
+index a1a7e93288..7c0ac6888b 100644
@@ -9001 +8560 @@
-index e8dda8d460..29944f5070 100644
+index 4dced0d328..68b0a8b8ab 100644
@@ -9004 +8563 @@
-@@ -1119,7 +1119,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
+@@ -1067,7 +1067,7 @@ s32 ngbe_set_pcie_master(struct ngbe_hw *hw, bool enable)
@@ -9014 +8573 @@
-index 23a452cacd..6c45ffaad3 100644
+index fb86e7b10d..4321924cb9 100644
@@ -9017,3 +8576,3 @@
-@@ -382,7 +382,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- err = ngbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- if (err) {
+@@ -381,7 +381,7 @@ eth_ngbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ ssid = ngbe_flash_read_dword(hw, 0xFFFDC);
+ if (ssid == 0x1) {
@@ -9076 +8635 @@
-index 45ea0b9c34..e84de5c1c7 100644
+index 9f11a2f317..8628edf8a7 100644
@@ -9079 +8638 @@
-@@ -170,7 +170,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
+@@ -139,7 +139,7 @@ cnxk_ep_xmit_pkts_scalar_mseg(struct rte_mbuf **tx_pkts, struct otx_ep_instr_que
@@ -9089 +8648 @@
-index 39b28de2d0..d44ac211f1 100644
+index ef275703c3..74b63a161f 100644
@@ -9147 +8706 @@
-index 2aeebb4675..76f72c64c9 100644
+index 7f4edf8dcf..fdab542246 100644
@@ -9232 +8791 @@
-index 73eb0c9d31..7d5dd91a77 100644
+index 82e57520d3..938c51b35d 100644
@@ -9235 +8794 @@
-@@ -120,7 +120,7 @@ union otx_ep_instr_irh {
+@@ -119,7 +119,7 @@ union otx_ep_instr_irh {
@@ -9245 +8804 @@
-index 46211361a0..c4a5a67c79 100644
+index 615cbbb648..c0298a56ac 100644
@@ -9248 +8807 @@
-@@ -175,7 +175,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
+@@ -118,7 +118,7 @@ otx_ep_dev_link_update(struct rte_eth_dev *eth_dev, int wait_to_complete)
@@ -9257 +8816 @@
-@@ -220,7 +220,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -163,7 +163,7 @@ otx_ep_dev_set_default_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9266 +8825 @@
-@@ -237,7 +237,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -180,7 +180,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9275 +8834 @@
-@@ -246,7 +246,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -189,7 +189,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9284 +8843 @@
-@@ -255,7 +255,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
+@@ -198,7 +198,7 @@ otx_ep_dev_start(struct rte_eth_dev *eth_dev)
@@ -9293 +8852 @@
-@@ -298,7 +298,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
+@@ -241,7 +241,7 @@ otx_ep_ism_setup(struct otx_ep_device *otx_epvf)
@@ -9302 +8861 @@
-@@ -342,12 +342,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
+@@ -285,12 +285,12 @@ otx_ep_chip_specific_setup(struct otx_ep_device *otx_epvf)
@@ -9317 +8876 @@
-@@ -361,7 +361,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -304,7 +304,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9326 +8885 @@
-@@ -385,7 +385,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -328,7 +328,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9335 +8894 @@
-@@ -393,7 +393,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
+@@ -336,7 +336,7 @@ otx_epdev_init(struct otx_ep_device *otx_epvf)
@@ -9344 +8903 @@
-@@ -413,10 +413,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
+@@ -356,10 +356,10 @@ otx_ep_dev_configure(struct rte_eth_dev *eth_dev)
@@ -9357 +8916 @@
-@@ -460,29 +460,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -403,29 +403,29 @@ otx_ep_rx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9392 +8951 @@
-@@ -511,7 +511,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
+@@ -454,7 +454,7 @@ otx_ep_rx_queue_release(struct rte_eth_dev *dev, uint16_t q_no)
@@ -9401 +8960 @@
-@@ -545,16 +545,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -488,16 +488,16 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9421 +8980 @@
-@@ -562,12 +562,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
+@@ -505,12 +505,12 @@ otx_ep_tx_queue_setup(struct rte_eth_dev *eth_dev, uint16_t q_no,
@@ -9436 +8995 @@
-@@ -660,23 +660,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
+@@ -603,23 +603,23 @@ otx_ep_dev_close(struct rte_eth_dev *eth_dev)
@@ -9465 +9024 @@
-@@ -692,7 +692,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
+@@ -635,7 +635,7 @@ otx_ep_dev_get_mac_addr(struct rte_eth_dev *eth_dev,
@@ -9474 +9033 @@
-@@ -741,22 +741,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
+@@ -684,22 +684,22 @@ static int otx_ep_eth_dev_query_set_vf_mac(struct rte_eth_dev *eth_dev,
@@ -9502,10 +9061 @@
-@@ -780,7 +780,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
-
- /* Parse devargs string */
- if (otx_ethdev_parse_devargs(eth_dev->device->devargs, otx_epvf)) {
-- otx_ep_err("Failed to parse devargs\n");
-+ otx_ep_err("Failed to parse devargs");
- return -EINVAL;
- }
-
-@@ -797,7 +797,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -734,7 +734,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9520 +9070 @@
-@@ -817,12 +817,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -754,12 +754,12 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9536 +9086 @@
-@@ -831,7 +831,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
+@@ -768,7 +768,7 @@ otx_ep_eth_dev_init(struct rte_eth_dev *eth_dev)
@@ -9665 +9215 @@
-index ec32ab087e..9680a59797 100644
+index c421ef0a1c..65a1f304e8 100644
@@ -9668 +9218 @@
-@@ -23,19 +23,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
+@@ -22,19 +22,19 @@ otx_ep_dmazone_free(const struct rte_memzone *mz)
@@ -9691 +9241 @@
-@@ -47,7 +47,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -46,7 +46,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9700 +9250 @@
-@@ -69,7 +69,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
+@@ -68,7 +68,7 @@ otx_ep_delete_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no)
@@ -9709 +9259 @@
-@@ -95,7 +95,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -94,7 +94,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9718 +9268 @@
-@@ -103,7 +103,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -102,7 +102,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9727 +9277 @@
-@@ -118,7 +118,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -117,7 +117,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9736 +9286 @@
-@@ -126,7 +126,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -125,7 +125,7 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9745 +9295 @@
-@@ -134,14 +134,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
+@@ -133,14 +133,14 @@ otx_ep_init_instr_queue(struct otx_ep_device *otx_ep, int iq_no, int num_descs,
@@ -9762 +9312 @@
-@@ -187,12 +187,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
+@@ -185,12 +185,12 @@ otx_ep_setup_iqs(struct otx_ep_device *otx_ep, uint32_t iq_no, int num_descs,
@@ -9777 +9327 @@
-@@ -235,7 +235,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -233,7 +233,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9786 +9336 @@
-@@ -255,7 +255,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
+@@ -253,7 +253,7 @@ otx_ep_delete_oqs(struct otx_ep_device *otx_ep, uint32_t oq_no)
@@ -9795 +9345 @@
-@@ -270,7 +270,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
+@@ -268,7 +268,7 @@ otx_ep_droq_setup_ring_buffers(struct otx_ep_droq *droq)
@@ -9804 +9354 @@
-@@ -324,7 +324,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -296,7 +296,7 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9813 +9363 @@
-@@ -344,23 +344,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
+@@ -316,23 +316,23 @@ otx_ep_init_droq(struct otx_ep_device *otx_ep, uint32_t q_no,
@@ -9841 +9391 @@
-@@ -396,17 +396,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
+@@ -366,17 +366,17 @@ otx_ep_setup_oqs(struct otx_ep_device *otx_ep, int oq_no, int num_descs,
@@ -9862 +9412 @@
-@@ -431,12 +431,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
+@@ -401,12 +401,12 @@ otx_ep_iqreq_delete(struct otx_ep_instr_queue *iq, uint32_t idx)
@@ -9877 +9427 @@
-@@ -599,7 +599,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
+@@ -568,7 +568,7 @@ prepare_xmit_gather_list(struct otx_ep_instr_queue *iq, struct rte_mbuf *m, uint
@@ -9886 +9436 @@
-@@ -675,16 +675,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
+@@ -644,16 +644,16 @@ otx_ep_xmit_pkts(void *tx_queue, struct rte_mbuf **pkts, uint16_t nb_pkts)
@@ -9910 +9460 @@
-@@ -757,7 +757,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -726,7 +726,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9919 +9469 @@
-@@ -766,7 +766,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -735,7 +735,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -9928 +9478 @@
-@@ -834,7 +834,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
+@@ -803,7 +803,7 @@ otx_ep_droq_read_packet(struct otx_ep_device *otx_ep, struct otx_ep_droq *droq,
@@ -10039 +9589 @@
-index 3c2154043c..c802b2c389 100644
+index 2a8378a33e..5f0cd1bb7f 100644
@@ -10079 +9629 @@
-index eccaaa2448..725ffcb2bc 100644
+index 0073dd7405..dc04a52639 100644
@@ -10103 +9653 @@
-@@ -583,16 +583,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -582,16 +582,16 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10123 +9673 @@
-@@ -603,7 +603,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
+@@ -602,7 +602,7 @@ pfe_eth_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused)
@@ -10132 +9682 @@
-@@ -993,24 +993,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
+@@ -992,24 +992,24 @@ pmd_pfe_probe(struct rte_vdev_device *vdev)
@@ -10227 +9777 @@
-index ede5fc83e3..25e28fd9f6 100644
+index c35585f5fd..dcc8cbe943 100644
@@ -10499 +10049 @@
-index 609d95dcfa..4441a90bdf 100644
+index ba2ef4058e..ee563c55ce 100644
@@ -10502 +10052 @@
-@@ -1814,7 +1814,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
+@@ -1817,7 +1817,7 @@ nicvf_dev_start(struct rte_eth_dev *dev)
@@ -10512 +10062 @@
-index 2fabb9fc4e..2834468764 100644
+index ad29c3cfec..a8bdc10232 100644
@@ -10515,3 +10065,3 @@
-@@ -613,7 +613,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
- err = txgbe_flash_read_dword(hw, 0xFFFDC, &ssid);
- if (err) {
+@@ -612,7 +612,7 @@ eth_txgbe_dev_init(struct rte_eth_dev *eth_dev, void *init_params __rte_unused)
+ ssid = txgbe_flash_read_dword(hw, 0xFFFDC);
+ if (ssid == 0x1) {
@@ -10524 +10074 @@
-@@ -2762,7 +2762,7 @@ txgbe_dev_detect_sfp(void *param)
+@@ -2756,7 +2756,7 @@ txgbe_dev_detect_sfp(void *param)
@@ -10734,13 +10283,0 @@
-diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-index 3246b74e13..bc3e2a9af5 100644
---- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
-+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
-@@ -670,7 +670,7 @@ vhost_vdpa_map_notification_area(struct virtio_user_dev *dev)
- notify_area[i] = mmap(NULL, page_size, PROT_WRITE, MAP_SHARED | MAP_FILE,
- data->vhostfd, i * page_size);
- if (notify_area[i] == MAP_FAILED) {
-- PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d\n",
-+ PMD_DRV_LOG(ERR, "(%s) Map failed for notify address of queue %d",
- dev->path, i);
- i--;
- goto map_err;
@@ -10748 +10285 @@
-index 48b872524a..e8642be86b 100644
+index 1bfd6aba80..d93d443ec9 100644
@@ -10751 +10288 @@
-@@ -1149,7 +1149,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
+@@ -1088,7 +1088,7 @@ virtio_user_dev_create_shadow_cvq(struct virtio_user_dev *dev, struct virtqueue
@@ -10761 +10298 @@
-index 467fb61137..78fac63ab6 100644
+index 70ae9c6035..f98cdb6d58 100644
@@ -10764 +10301 @@
-@@ -1095,10 +1095,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
+@@ -1094,10 +1094,10 @@ vmxnet3_dev_start(struct rte_eth_dev *dev)
@@ -10870 +10407 @@
-index a972b3b7a4..113a22b0a7 100644
+index f89bd3f9e2..997fbf8a0d 100644
@@ -10916 +10453 @@
-@@ -661,7 +661,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
+@@ -660,7 +660,7 @@ ifpga_rawdev_info_get(struct rte_rawdev *dev,
@@ -10925 +10462 @@
-@@ -816,13 +816,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
+@@ -815,13 +815,13 @@ fpga_pr(struct rte_rawdev *raw_dev, u32 port_id, const char *buffer, u32 size,
@@ -10941 +10478 @@
-@@ -846,14 +846,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -845,14 +845,14 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10959 +10496 @@
-@@ -864,7 +864,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -863,7 +863,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10968 +10505 @@
-@@ -880,7 +880,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
+@@ -879,7 +879,7 @@ rte_fpga_do_pr(struct rte_rawdev *rawdev, int port_id,
@@ -10977 +10514 @@
-@@ -923,7 +923,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -922,7 +922,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10986 +10523 @@
-@@ -954,7 +954,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
+@@ -953,7 +953,7 @@ ifpga_rawdev_pr(struct rte_rawdev *dev,
@@ -10995 +10532 @@
-@@ -1230,13 +1230,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
+@@ -1229,13 +1229,13 @@ fme_err_read_seu_emr(struct opae_manager *mgr)
@@ -11011 +10548 @@
-@@ -1251,7 +1251,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
+@@ -1250,7 +1250,7 @@ static int fme_clear_warning_intr(struct opae_manager *mgr)
@@ -11020 +10557 @@
-@@ -1263,14 +1263,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
+@@ -1262,14 +1262,14 @@ static int fme_clean_fme_error(struct opae_manager *mgr)
@@ -11037 +10574 @@
-@@ -1290,15 +1290,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
+@@ -1289,15 +1289,15 @@ fme_err_handle_error0(struct opae_manager *mgr)
@@ -11058 +10595 @@
-@@ -1321,17 +1321,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
+@@ -1320,17 +1320,17 @@ fme_err_handle_catfatal_error(struct opae_manager *mgr)
@@ -11082 +10619 @@
-@@ -1350,28 +1350,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
+@@ -1349,28 +1349,28 @@ fme_err_handle_nonfaterror(struct opae_manager *mgr)
@@ -11122 +10659 @@
-@@ -1381,7 +1381,7 @@ fme_interrupt_handler(void *param)
+@@ -1380,7 +1380,7 @@ fme_interrupt_handler(void *param)
@@ -11131 +10668 @@
-@@ -1407,7 +1407,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1406,7 +1406,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11140 +10677 @@
-@@ -1417,7 +1417,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
+@@ -1416,7 +1416,7 @@ ifpga_unregister_msix_irq(struct ifpga_rawdev *dev, enum ifpga_irq_type type,
@@ -11149 +10686 @@
-@@ -1480,7 +1480,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1479,7 +1479,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -11158 +10695 @@
-@@ -1521,7 +1521,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1520,7 +1520,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'eal/x86: fix 32-bit write combining store' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (2 preceding siblings ...)
2024-11-11 6:26 ` patch 'drivers: remove redundant newline from logs' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
` (116 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Radu Nicolau, Tyler Retzlaff, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f3f7310081a8db84e56dbeefa092c52874dedcc5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f3f7310081a8db84e56dbeefa092c52874dedcc5 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 14:27:57 +0100
Subject: [PATCH] eal/x86: fix 32-bit write combining store
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 41b09d64e35b877e8f29c4e5a8cf944e303695dd ]
The "movdiri" instruction is given as a series of bytes in rte_io.h so
that it works on compilers/assemblers which are unaware of the
instruction.
The REX prefix (0x40) on this instruction is invalid for 32-bit code,
causing issues.
Thankfully, the prefix is unnecessary in 64-bit code, since the data size
used is 32 bits.
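As an aside (not part of the queued patch), the impact on 32-bit builds can
be illustrated with a small standalone sketch: in 32-bit mode the byte 0x40
is not a REX prefix but a one-byte "inc %eax" instruction, so the register
holding the value is modified behind the compiler's back before the store
executes. The helper names below are hypothetical and only contrast the two
encodings:

    #include <stdint.h>

    /* Hypothetical 32-bit reproduction: the leading 0x40 byte decodes as
     * "inc %eax", so EAX no longer holds 'value' when the MOVDIRI bytes
     * execute and the wrong data is written to 'addr'. */
    static inline void
    movdiri_with_bogus_prefix(uint32_t value, volatile void *addr)
    {
            asm volatile(".byte 0x40, 0x0f, 0x38, 0xf9, 0x02"
                         :
                         : "a" (value), "d" (addr));
    }

    /* Same store without the prefix, as in the queued fix; a 32-bit
     * operand never needs REX, so this is also correct for 64-bit code. */
    static inline void
    movdiri_fixed(uint32_t value, volatile void *addr)
    {
            asm volatile(".byte 0x0f, 0x38, 0xf9, 0x02"
                         :
                         : "a" (value), "d" (addr));
    }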
Fixes: 8a00dfc738fe ("eal: add write combining store")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Radu Nicolau <radu.nicolau@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
---
lib/eal/x86/include/rte_io.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/eal/x86/include/rte_io.h b/lib/eal/x86/include/rte_io.h
index 0e1fefdee1..5366e09c47 100644
--- a/lib/eal/x86/include/rte_io.h
+++ b/lib/eal/x86/include/rte_io.h
@@ -24,7 +24,7 @@ __rte_x86_movdiri(uint32_t value, volatile void *addr)
{
asm volatile(
/* MOVDIRI */
- ".byte 0x40, 0x0f, 0x38, 0xf9, 0x02"
+ ".byte 0x0f, 0x38, 0xf9, 0x02"
:
: "a" (value), "d" (addr));
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.736952027 +0800
+++ 0004-eal-x86-fix-32-bit-write-combining-store.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From 41b09d64e35b877e8f29c4e5a8cf944e303695dd Mon Sep 17 00:00:00 2001
+From f3f7310081a8db84e56dbeefa092c52874dedcc5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 41b09d64e35b877e8f29c4e5a8cf944e303695dd ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/event: fix schedule type' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (3 preceding siblings ...)
2024-11-11 6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
` (115 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Wed, 24 Jul 2024 01:02:12 +0530
Subject: [PATCH] test/event: fix schedule type
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit adadb5585bd50260c3fa5495fcbe8baf64386f7e ]
A missing schedule type assignment might leave it with an
incorrect value; set it to SCHED_TYPE_PARALLEL.
Fixes: d007a7f39de3 ("eventdev: introduce link profiles")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
app/test/test_eventdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index e4e234dc98..9a6c8f470c 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1189,6 +1189,7 @@ test_eventdev_profile_switch(void)
ev.op = RTE_EVENT_OP_NEW;
ev.flow_id = 0;
ev.u64 = 0xBADF00D0;
+ ev.sched_type = RTE_SCHED_TYPE_PARALLEL;
rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
TEST_ASSERT(rc == 1, "Failed to enqueue event");
ev.queue_id = 1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.773760326 +0800
+++ 0005-test-event-fix-schedule-type.patch 2024-11-11 14:23:05.002192842 +0800
@@ -1 +1 @@
-From adadb5585bd50260c3fa5495fcbe8baf64386f7e Mon Sep 17 00:00:00 2001
+From 241ffcb0a722c7e556f3a6fa6de5bb89dccf6f51 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit adadb5585bd50260c3fa5495fcbe8baf64386f7e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/event: fix target event queue' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (4 preceding siblings ...)
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
` (114 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, Amit Prakash Shukla, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=638e0139f654a52d374517d41797988ec47f63f3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 638e0139f654a52d374517d41797988ec47f63f3 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Thu, 22 Aug 2024 02:07:31 +0530
Subject: [PATCH] test/event: fix target event queue
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 367fa3504851ec6c4aef393a7c53638da45a903e ]
In OP_FWD mode, if the internal port is supported, the target event queue
should be TEST_APP_EV_QUEUE_ID.
Fixes: a276e7c8fbb3 ("test/event: add DMA adapter auto-test")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Tested-by: Amit Prakash Shukla <amitprakashs@marvell.com>
Acked-by: Amit Prakash Shukla <amitprakashs@marvell.com>
---
app/test/test_event_dma_adapter.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/app/test/test_event_dma_adapter.c b/app/test/test_event_dma_adapter.c
index 35b417b69f..de0d671d3f 100644
--- a/app/test/test_event_dma_adapter.c
+++ b/app/test/test_event_dma_adapter.c
@@ -276,7 +276,10 @@ test_op_forward_mode(void)
memset(&ev[i], 0, sizeof(struct rte_event));
ev[i].event = 0;
ev[i].event_type = RTE_EVENT_TYPE_DMADEV;
- ev[i].queue_id = TEST_DMA_EV_QUEUE_ID;
+ if (params.internal_port_op_fwd)
+ ev[i].queue_id = TEST_APP_EV_QUEUE_ID;
+ else
+ ev[i].queue_id = TEST_DMA_EV_QUEUE_ID;
ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
ev[i].flow_id = 0xAABB;
ev[i].event_ptr = op;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.808321426 +0800
+++ 0006-test-event-fix-target-event-queue.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From 367fa3504851ec6c4aef393a7c53638da45a903e Mon Sep 17 00:00:00 2001
+From 638e0139f654a52d374517d41797988ec47f63f3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 367fa3504851ec6c4aef393a7c53638da45a903e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3b39521153..9988d4fc7b 100644
+index 35b417b69f..de0d671d3f 100644
@@ -23 +25,2 @@
-@@ -271,7 +271,10 @@ test_op_forward_mode(void)
+@@ -276,7 +276,10 @@ test_op_forward_mode(void)
+ memset(&ev[i], 0, sizeof(struct rte_event));
@@ -25 +27,0 @@
- ev[i].op = RTE_EVENT_OP_NEW;
* patch 'examples/eventdev: fix queue crash with generic pipeline' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (5 preceding siblings ...)
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
` (113 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Chenxingyu Wang, Pavan Nikhilesh, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=993e9b6fdf97d1ae2b30766c7b273c3e80de94a5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 993e9b6fdf97d1ae2b30766c7b273c3e80de94a5 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Wed, 18 Sep 2024 06:41:42 +0000
Subject: [PATCH] examples/eventdev: fix queue crash with generic pipeline
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f6f2307931c90d924405ea44b0b4be9d3d01bd17 ]
There was a segmentation fault when executing eventdev_pipeline with
command [1] with ConnectX-5 NIC card:
0x000000000079208c in rte_eth_tx_buffer (tx_pkt=0x16f8ed300, buffer=0x100,
queue_id=11, port_id=0) at
../lib/ethdev/rte_ethdev.h:6636
txa_service_tx (txa=0x17b19d080, ev=0xffffffffe500, n=4) at
../lib/eventdev/rte_event_eth_tx_adapter.c:631
0x0000000000792234 in txa_service_func (args=0x17b19d080) at
../lib/eventdev/rte_event_eth_tx_adapter.c:666
0x00000000008b0784 in service_runner_do_callback (s=0x17fffe100,
cs=0x17ffb5f80, service_idx=2) at
../lib/eal/common/rte_service.c:405
0x00000000008b0ad8 in service_run (i=2, cs=0x17ffb5f80,
service_mask=18446744073709551615, s=0x17fffe100,
serialize_mt_unsafe=0) at
../lib/eal/common/rte_service.c:441
0x00000000008b0c68 in rte_service_run_iter_on_app_lcore (id=2,
serialize_mt_unsafe=0) at
../lib/eal/common/rte_service.c:477
0x000000000057bcc4 in schedule_devices (lcore_id=0) at
../examples/eventdev_pipeline/pipeline_common.h:138
0x000000000057ca94 in worker_generic_burst (arg=0x17b131e80) at
../examples/eventdev_pipeline/
pipeline_worker_generic.c:83
0x00000000005794a8 in main (argc=11, argv=0xfffffffff470) at
../examples/eventdev_pipeline/main.c:449
The root cause is that the queue_id (11) is invalid; it comes from
mbuf.hash.txadapter.txq, which may be pre-written by the NIC driver when
receiving packets (e.g. the driver pre-writes the mbuf.hash.fdir.hi field).
Because this example only enables one ethdev queue, fix it by resetting
txq to zero in the first worker stage.
[1] dpdk-eventdev_pipeline -l 0-48 --vdev event_sw0 -- -r1 -t1 -e1 -w ff0
-s5 -n0 -c32 -W1000 -D
When launch eventdev_pipeline with command [1], event_sw
Fixes: 81fb40f95c82 ("examples/eventdev: add generic worker pipeline")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Chenxingyu Wang <wangchenxingyu@huawei.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
.mailmap | 1 +
examples/eventdev_pipeline/pipeline_worker_generic.c | 12 ++++++++----
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/.mailmap b/.mailmap
index f2883144f3..7aa2c27226 100644
--- a/.mailmap
+++ b/.mailmap
@@ -229,6 +229,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
Chengwen Feng <fengchengwen@huawei.com>
Chenmin Sun <chenmin.sun@intel.com>
Chenming Chang <ccm@ccm.ink>
+Chenxingyu Wang <wangchenxingyu@huawei.com>
Chenxu Di <chenxux.di@intel.com>
Chenyu Huang <chenyux.huang@intel.com>
Cheryl Houser <chouser@vmware.com>
diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c
index 783f68c91e..831d7fd53d 100644
--- a/examples/eventdev_pipeline/pipeline_worker_generic.c
+++ b/examples/eventdev_pipeline/pipeline_worker_generic.c
@@ -38,10 +38,12 @@ worker_generic(void *arg)
}
received++;
- /* The first worker stage does classification */
- if (ev.queue_id == cdata.qid[0])
+ /* The first worker stage does classification and sets txq. */
+ if (ev.queue_id == cdata.qid[0]) {
ev.flow_id = ev.mbuf->hash.rss
% cdata.num_fids;
+ rte_event_eth_tx_adapter_txq_set(ev.mbuf, 0);
+ }
ev.queue_id = cdata.next_qid[ev.queue_id];
ev.op = RTE_EVENT_OP_FORWARD;
@@ -96,10 +98,12 @@ worker_generic_burst(void *arg)
for (i = 0; i < nb_rx; i++) {
- /* The first worker stage does classification */
- if (events[i].queue_id == cdata.qid[0])
+ /* The first worker stage does classification and sets txq. */
+ if (events[i].queue_id == cdata.qid[0]) {
events[i].flow_id = events[i].mbuf->hash.rss
% cdata.num_fids;
+ rte_event_eth_tx_adapter_txq_set(events[i].mbuf, 0);
+ }
events[i].queue_id = cdata.next_qid[events[i].queue_id];
events[i].op = RTE_EVENT_OP_FORWARD;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.843270025 +0800
+++ 0007-examples-eventdev-fix-queue-crash-with-generic-pipel.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From f6f2307931c90d924405ea44b0b4be9d3d01bd17 Mon Sep 17 00:00:00 2001
+From 993e9b6fdf97d1ae2b30766c7b273c3e80de94a5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f6f2307931c90d924405ea44b0b4be9d3d01bd17 ]
@@ -46 +48,0 @@
-Cc: stable@dpdk.org
@@ -57 +59 @@
-index 94fa73aa36..8a832ba4be 100644
+index f2883144f3..7aa2c27226 100644
@@ -60 +62 @@
-@@ -236,6 +236,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
+@@ -229,6 +229,7 @@ Cheng Peng <cheng.peng5@zte.com.cn>
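For reference, a minimal sketch of the first-stage pattern the fix above relies on; the helper name and the num_fids parameter are illustrative, and queue 0 is assumed because the example drives a single ethdev Tx queue. The key point is that mbuf->hash is a union shared by the rss/fdir/sched/txadapter views, so txq must be set explicitly before the mbuf reaches the Tx adapter.

#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Classification stage: derive the flow from the RSS hash, then overwrite
 * whatever the Rx driver may have left in hash.txadapter.txq (it shares a
 * union with hash.fdir/hash.sched) with the only Tx queue in use. */
static inline void
first_stage_classify(struct rte_event *ev, uint32_t num_fids)
{
	ev->flow_id = ev->mbuf->hash.rss % num_fids;
	rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0);
}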
* patch 'crypto/dpaa2_sec: fix memory leak' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (6 preceding siblings ...)
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
` (112 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Gagandeep Singh; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=71c5928f9b78def6b0a13fd7e71f5505428b3dd6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 71c5928f9b78def6b0a13fd7e71f5505428b3dd6 Mon Sep 17 00:00:00 2001
From: Gagandeep Singh <g.singh@nxp.com>
Date: Tue, 6 Aug 2024 15:57:26 +0530
Subject: [PATCH] crypto/dpaa2_sec: fix memory leak
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 ]
Fix a memory leak when creating a PDCP session
with invalid data.
Fixes: bef594ec5cc8 ("crypto/dpaa2_sec: support PDCP offload")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
---
drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index b65bea3b3f..bd5590c02d 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -3466,6 +3466,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
}
} else {
DPAA2_SEC_ERR("Invalid crypto type");
+ rte_free(priv);
return -EINVAL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.876324924 +0800
+++ 0008-crypto-dpaa2_sec-fix-memory-leak.patch 2024-11-11 14:23:05.012192842 +0800
@@ -1 +1 @@
-From 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 Mon Sep 17 00:00:00 2001
+From 71c5928f9b78def6b0a13fd7e71f5505428b3dd6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9c0abd27c3fe7a8b842d6fc254ac1241f4ba8b65 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 2cdf9308f8..e4109e8f0a 100644
+index b65bea3b3f..bd5590c02d 100644
@@ -21 +23 @@
-@@ -3422,6 +3422,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
+@@ -3466,6 +3466,7 @@ dpaa2_sec_set_pdcp_session(struct rte_cryptodev *dev,
* patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (7 preceding siblings ...)
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
` (111 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Varun Sethi; +Cc: xuemingl, Gagandeep Singh, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5b8d264d559366660fcf5db9bf31db47f41708f0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5b8d264d559366660fcf5db9bf31db47f41708f0 Mon Sep 17 00:00:00 2001
From: Varun Sethi <v.sethi@nxp.com>
Date: Tue, 6 Aug 2024 15:57:27 +0530
Subject: [PATCH] common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 ]
Add a Jump instruction with the CALM flag to ensure that
previous processing has completed.
Fixes: 8827d94398f1 ("crypto/dpaa2_sec/hw: support AES-AES 18-bit PDCP")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Signed-off-by: Varun Sethi <v.sethi@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/common/dpaax/caamflib/desc/pdcp.h | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/drivers/common/dpaax/caamflib/desc/pdcp.h b/drivers/common/dpaax/caamflib/desc/pdcp.h
index 0ed9eec816..27dd5c4347 100644
--- a/drivers/common/dpaax/caamflib/desc/pdcp.h
+++ b/drivers/common/dpaax/caamflib/desc/pdcp.h
@@ -1220,6 +1220,11 @@ pdcp_insert_cplane_snow_aes_op(struct program *p,
SEQFIFOLOAD(p, MSG1, 0, VLF | LAST1 | FLUSH1);
MOVEB(p, CONTEXT1, 0, MATH3, 0, 4, WAITCOMP | IMMED);
+ /* conditional jump with calm added to ensure that the
+ * previous processing has been completed
+ */
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
@@ -1921,6 +1926,11 @@ pdcp_insert_cplane_zuc_aes_op(struct program *p,
MOVEB(p, OFIFO, 0, MATH3, 0, 4, IMMED);
+ /* conditional jump with calm added to ensure that the
+ * previous processing has been completed
+ */
+ JUMP(p, 1, LOCAL_JUMP, ALL_TRUE, CALM);
+
LOAD(p, CLRW_RESET_CLS1_CHA |
CLRW_CLR_C1KEY |
CLRW_CLR_C1CTX |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.912654224 +0800
+++ 0009-common-dpaax-caamflib-fix-PDCP-SNOW-ZUC-watchdog.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 Mon Sep 17 00:00:00 2001
+From 5b8d264d559366660fcf5db9bf31db47f41708f0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2369bc1343fa5aac2890b2a3e12d65a2f1a2fd31 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index bc35114cf4..9ada3905c5 100644
+index 0ed9eec816..27dd5c4347 100644
* patch 'dev: fix callback lookup when unregistering device' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (8 preceding siblings ...)
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
` (110 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Malcolm Bumgardner; +Cc: xuemingl, Long Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=310058b8699eb2833fe0b8093a355f29fcea1b67
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 310058b8699eb2833fe0b8093a355f29fcea1b67 Mon Sep 17 00:00:00 2001
From: Malcolm Bumgardner <mbumgard@cisco.com>
Date: Thu, 18 Jul 2024 12:37:28 -0700
Subject: [PATCH] dev: fix callback lookup when unregistering device
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d ]
The device event unregister code unconditionally removes all
callbacks that were registered with device_name set to NULL.
This results in many callbacks being incorrectly removed.
Fix this by only removing callbacks with matching cb_fn and cb_arg.
Fixes: a753e53d517b ("eal: add device event monitor framework")
Signed-off-by: Malcolm Bumgardner <mbumgard@cisco.com>
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
.mailmap | 1 +
lib/eal/common/eal_common_dev.c | 13 +++++++------
2 files changed, 8 insertions(+), 6 deletions(-)
diff --git a/.mailmap b/.mailmap
index 7aa2c27226..4b8a131d55 100644
--- a/.mailmap
+++ b/.mailmap
@@ -860,6 +860,7 @@ Mahesh Adulla <mahesh.adulla@amd.com>
Mahipal Challa <mchalla@marvell.com>
Mah Yock Gen <yock.gen.mah@intel.com>
Mairtin o Loingsigh <mairtin.oloingsigh@intel.com>
+Malcolm Bumgardner <mbumgard@cisco.com>
Mallesham Jatharakonda <mjatharakonda@oneconvergence.com>
Mallesh Koujalagi <malleshx.koujalagi@intel.com>
Malvika Gupta <malvika.gupta@arm.com>
diff --git a/lib/eal/common/eal_common_dev.c b/lib/eal/common/eal_common_dev.c
index 614ef6c9fc..bc53b2e28d 100644
--- a/lib/eal/common/eal_common_dev.c
+++ b/lib/eal/common/eal_common_dev.c
@@ -550,16 +550,17 @@ rte_dev_event_callback_unregister(const char *device_name,
next = TAILQ_NEXT(event_cb, next);
if (device_name != NULL && event_cb->dev_name != NULL) {
- if (!strcmp(event_cb->dev_name, device_name)) {
- if (event_cb->cb_fn != cb_fn ||
- (cb_arg != (void *)-1 &&
- event_cb->cb_arg != cb_arg))
- continue;
- }
+ if (strcmp(event_cb->dev_name, device_name))
+ continue;
} else if (device_name != NULL) {
continue;
}
+ /* Remove only matching callback with arg */
+ if (event_cb->cb_fn != cb_fn ||
+ (cb_arg != (void *)-1 && event_cb->cb_arg != cb_arg))
+ continue;
+
/*
* if this callback is not executing right now,
* then remove it.
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:05.948838523 +0800
+++ 0010-dev-fix-callback-lookup-when-unregistering-device.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d Mon Sep 17 00:00:00 2001
+From 310058b8699eb2833fe0b8093a355f29fcea1b67 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 66fd2cc2e47c69ee57f0fe32558e55b085c2e32d ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index a66da3c8cb..8004772125 100644
+index 7aa2c27226..4b8a131d55 100644
@@ -27,2 +29,2 @@
-@@ -886,6 +886,7 @@ Mahipal Challa <mchalla@marvell.com>
- Mahmoud Maatuq <mahmoudmatook.mm@gmail.com>
+@@ -860,6 +860,7 @@ Mahesh Adulla <mahesh.adulla@amd.com>
+ Mahipal Challa <mchalla@marvell.com>
@@ -36 +38 @@
-index a99252b02f..70aa04dcd9 100644
+index 614ef6c9fc..bc53b2e28d 100644
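For reference, a minimal usage sketch of the matching semantics this fix restores; the device name, callback body and cookie variables are illustrative. As the hunk above shows, passing (void *)-1 as cb_arg acts as a wildcard matching any registered argument.

#include <rte_common.h>
#include <rte_dev.h>

static void
hotplug_cb(const char *device_name, enum rte_dev_event_type event, void *cb_arg)
{
	RTE_SET_USED(device_name);
	RTE_SET_USED(event);
	RTE_SET_USED(cb_arg);
}

static int cookie_a, cookie_b;

static void
callback_matching_example(void)
{
	/* Two registrations share cb_fn but use different arguments. */
	rte_dev_event_callback_register("0000:03:00.0", hotplug_cb, &cookie_a);
	rte_dev_event_callback_register("0000:03:00.0", hotplug_cb, &cookie_b);

	/* With the fix, only the exact (cb_fn, cb_arg) pair is removed;
	 * the &cookie_b registration stays in place. */
	rte_dev_event_callback_unregister("0000:03:00.0", hotplug_cb, &cookie_a);

	/* (void *)-1 removes every callback registered with this cb_fn. */
	rte_dev_event_callback_unregister("0000:03:00.0", hotplug_cb, (void *)-1);
}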
* patch 'crypto/scheduler: fix session size computation' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (9 preceding siblings ...)
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
` (109 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Julien Hascoet; +Cc: xuemingl, Kai Ji, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a082f249748fac379ce9e996ba8bf9a555b89a10
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a082f249748fac379ce9e996ba8bf9a555b89a10 Mon Sep 17 00:00:00 2001
From: Julien Hascoet <ju.hascoet@gmail.com>
Date: Fri, 5 Jul 2024 14:57:56 +0200
Subject: [PATCH] crypto/scheduler: fix session size computation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 ]
The crypto scheduler session size computation was taking
into account only the worker session sizes and not its own.
Fixes: e2af4e403c15 ("crypto/scheduler: support DOCSIS security protocol")
Signed-off-by: Julien Hascoet <ju.hascoet@gmail.com>
Acked-by: Kai Ji <kai.ji@intel.com>
---
.mailmap | 1 +
drivers/crypto/scheduler/scheduler_pmd_ops.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 4b8a131d55..8b9e849d05 100644
--- a/.mailmap
+++ b/.mailmap
@@ -711,6 +711,7 @@ Julien Aube <julien_dpdk@jaube.fr>
Julien Castets <jcastets@scaleway.com>
Julien Courtat <julien.courtat@6wind.com>
Julien Cretin <julien.cretin@trust-in-soft.com>
+Julien Hascoet <ju.hascoet@gmail.com>
Julien Massonneau <julien.massonneau@6wind.com>
Julien Meunier <julien.meunier@nokia.com> <julien.meunier@6wind.com>
Július Milan <jmilan.dev@gmail.com>
diff --git a/drivers/crypto/scheduler/scheduler_pmd_ops.c b/drivers/crypto/scheduler/scheduler_pmd_ops.c
index a18f7a08b0..6e43438469 100644
--- a/drivers/crypto/scheduler/scheduler_pmd_ops.c
+++ b/drivers/crypto/scheduler/scheduler_pmd_ops.c
@@ -185,7 +185,7 @@ scheduler_session_size_get(struct scheduler_ctx *sched_ctx,
uint8_t session_type)
{
uint8_t i = 0;
- uint32_t max_priv_sess_size = 0;
+ uint32_t max_priv_sess_size = sizeof(struct scheduler_session_ctx);
/* Check what is the maximum private session size for all workers */
for (i = 0; i < sched_ctx->nb_workers; i++) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:05.999628822 +0800
+++ 0011-crypto-scheduler-fix-session-size-computation.patch 2024-11-11 14:23:05.022192842 +0800
@@ -1 +1 @@
-From b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 Mon Sep 17 00:00:00 2001
+From a082f249748fac379ce9e996ba8bf9a555b89a10 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b00bf84f0d3eb4c6a2944c918f697dc17cb3fce5 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 8004772125..15d9b61029 100644
+index 4b8a131d55..8b9e849d05 100644
@@ -23 +25 @@
-@@ -734,6 +734,7 @@ Julien Aube <julien_dpdk@jaube.fr>
+@@ -711,6 +711,7 @@ Julien Aube <julien_dpdk@jaube.fr>
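The computation being corrected reduces to a maximum that must be seeded with the scheduler's own per-session context size rather than zero; a simplified, self-contained sketch follows (names are illustrative, not the driver's).

#include <stdint.h>

/* The resulting session must hold the scheduler's own context *and* the
 * largest worker session; seeding the maximum with 0 (the old behaviour)
 * silently drops the scheduler's own requirement. */
static uint32_t
session_size_get(uint32_t own_ctx_size, const uint32_t *worker_sizes,
		 uint8_t nb_workers)
{
	uint32_t max_size = own_ctx_size;
	uint8_t i;

	for (i = 0; i < nb_workers; i++)
		if (worker_sizes[i] > max_size)
			max_size = worker_sizes[i];

	return max_size;
}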
* patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (10 preceding siblings ...)
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
` (108 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Tejasree Kondoj; +Cc: xuemingl, Akhil Goyal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e3875312dbf74eeec02d8460ae4dd2f35bc2b464
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e3875312dbf74eeec02d8460ae4dd2f35bc2b464 Mon Sep 17 00:00:00 2001
From: Tejasree Kondoj <ktejasree@marvell.com>
Date: Fri, 13 Sep 2024 12:37:26 +0530
Subject: [PATCH] examples/ipsec-secgw: fix dequeue count from cryptodev
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 88948ff31f57618a74c8985c59e332676995b438 ]
Set the dequeue packet count to at most MAX_PKT_BURST
instead of MAX_PKTS.
Dequeue from the cryptodev is called with MAX_PKTS, but the
routing functions allocate hop/dst_ip arrays of
size MAX_PKT_BURST. This can corrupt the stack, causing a
stack-smashing error when more than MAX_PKT_BURST
packets are returned from the cryptodev.
Fixes: a2b445b810ac ("examples/ipsec-secgw: allow larger burst size for vectors")
Signed-off-by: Tejasree Kondoj <ktejasree@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
---
examples/ipsec-secgw/ipsec-secgw.c | 6 ++++--
examples/ipsec-secgw/ipsec_process.c | 3 ++-
2 files changed, 6 insertions(+), 3 deletions(-)
diff --git a/examples/ipsec-secgw/ipsec-secgw.c b/examples/ipsec-secgw/ipsec-secgw.c
index 761b9cf396..5e77d9d2ce 100644
--- a/examples/ipsec-secgw/ipsec-secgw.c
+++ b/examples/ipsec-secgw/ipsec-secgw.c
@@ -626,12 +626,13 @@ drain_inbound_crypto_queues(const struct lcore_conf *qconf,
uint32_t n;
struct ipsec_traffic trf;
unsigned int lcoreid = rte_lcore_id();
+ const int nb_pkts = RTE_DIM(trf.ipsec.pkts);
if (app_sa_prm.enable == 0) {
/* dequeue packets from crypto-queue */
n = ipsec_inbound_cqp_dequeue(ctx, trf.ipsec.pkts,
- RTE_DIM(trf.ipsec.pkts));
+ RTE_MIN(MAX_PKT_BURST, nb_pkts));
trf.ip4.num = 0;
trf.ip6.num = 0;
@@ -663,12 +664,13 @@ drain_outbound_crypto_queues(const struct lcore_conf *qconf,
{
uint32_t n;
struct ipsec_traffic trf;
+ const int nb_pkts = RTE_DIM(trf.ipsec.pkts);
if (app_sa_prm.enable == 0) {
/* dequeue packets from crypto-queue */
n = ipsec_outbound_cqp_dequeue(ctx, trf.ipsec.pkts,
- RTE_DIM(trf.ipsec.pkts));
+ RTE_MIN(MAX_PKT_BURST, nb_pkts));
trf.ip4.num = 0;
trf.ip6.num = 0;
diff --git a/examples/ipsec-secgw/ipsec_process.c b/examples/ipsec-secgw/ipsec_process.c
index b0cece3ad1..1a64a4b49f 100644
--- a/examples/ipsec-secgw/ipsec_process.c
+++ b/examples/ipsec-secgw/ipsec_process.c
@@ -336,6 +336,7 @@ ipsec_cqp_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
struct rte_ipsec_session *ss;
struct traffic_type *out;
struct rte_ipsec_group *pg;
+ const int nb_cops = RTE_DIM(trf->ipsec.pkts);
struct rte_crypto_op *cop[RTE_DIM(trf->ipsec.pkts)];
struct rte_ipsec_group grp[RTE_DIM(trf->ipsec.pkts)];
@@ -345,7 +346,7 @@ ipsec_cqp_process(struct ipsec_ctx *ctx, struct ipsec_traffic *trf)
out = &trf->ipsec;
/* dequeue completed crypto-ops */
- n = ctx_dequeue(ctx, cop, RTE_DIM(cop));
+ n = ctx_dequeue(ctx, cop, RTE_MIN(MAX_PKT_BURST, nb_cops));
if (n == 0)
return;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.037734621 +0800
+++ 0012-examples-ipsec-secgw-fix-dequeue-count-from-cryptode.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 88948ff31f57618a74c8985c59e332676995b438 Mon Sep 17 00:00:00 2001
+From e3875312dbf74eeec02d8460ae4dd2f35bc2b464 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 88948ff31f57618a74c8985c59e332676995b438 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index e98ad2572e..063cc8768e 100644
+index 761b9cf396..5e77d9d2ce 100644
@@ -60 +62 @@
-index ddbe30745b..5080e810e0 100644
+index b0cece3ad1..1a64a4b49f 100644
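The invariant behind this fix, stated as a tiny sketch (the MAX_PKT_BURST value and helper name are illustrative): the burst requested from the cryptodev must never exceed the scratch arrays (hop/dst_ip, sized MAX_PKT_BURST) that the routing stage writes into, e.g. n = dequeue(ctx, pkts, bounded_burst(RTE_DIM(pkts))).

#include <rte_common.h>
#include <stdint.h>

#define MAX_PKT_BURST 32	/* illustrative value */

/* Cap a dequeue request at what the routing scratch arrays can hold. */
static inline uint32_t
bounded_burst(uint32_t ring_room)
{
	return RTE_MIN((uint32_t)MAX_PKT_BURST, ring_room);
}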
* patch 'bpf: fix free function mismatch if convert fails' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (11 preceding siblings ...)
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
@ 2024-11-11 6:26 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
` (107 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:26 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d4c099c6fcf9849630f3b7f930fc193d3ef54e6c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d4c099c6fcf9849630f3b7f930fc193d3ef54e6c Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:11 -0700
Subject: [PATCH] bpf: fix free function mismatch if convert fails
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a3923d6bd5c0b9838d8f4678233093ffad036193 ]
If conversion of cBPF to eBPF fails, then an object allocated with
rte_malloc() would be passed to free().
[908/3201] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o
../lib/bpf/bpf_convert.c: In function ‘rte_bpf_convert’:
../lib/bpf/bpf_convert.c:559:17:
warning: ‘free’ called on pointer returned from a mismatched
allocation function [-Wmismatched-dealloc]
559 | free(prm);
| ^~~~~~~~~
../lib/bpf/bpf_convert.c:545:15: note: returned from ‘rte_zmalloc’
545 | prm = rte_zmalloc("bpf_filter",
| ^~~~~~~~~~~~~~~~~~~~~~~~~
546 | sizeof(*prm) + ebpf_len * sizeof(*ebpf), 0);
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Fixes: 2eccf6afbea9 ("bpf: add function to convert classic BPF to DPDK BPF")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
lib/bpf/bpf_convert.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/bpf/bpf_convert.c b/lib/bpf/bpf_convert.c
index d441be6663..cb400a4ffb 100644
--- a/lib/bpf/bpf_convert.c
+++ b/lib/bpf/bpf_convert.c
@@ -556,7 +556,7 @@ rte_bpf_convert(const struct bpf_program *prog)
ret = bpf_convert_filter(prog->bf_insns, prog->bf_len, ebpf, &ebpf_len);
if (ret < 0) {
RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__);
- free(prm);
+ rte_free(prm);
rte_errno = -ret;
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.070564820 +0800
+++ 0013-bpf-fix-free-function-mismatch-if-convert-fails.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From a3923d6bd5c0b9838d8f4678233093ffad036193 Mon Sep 17 00:00:00 2001
+From d4c099c6fcf9849630f3b7f930fc193d3ef54e6c Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a3923d6bd5c0b9838d8f4678233093ffad036193 ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -37 +39 @@
-index d7ff2b4325..e7e298c9cb 100644
+index d441be6663..cb400a4ffb 100644
@@ -43 +45 @@
- RTE_BPF_LOG_LINE(ERR, "%s: cannot convert cBPF to eBPF", __func__);
+ RTE_BPF_LOG(ERR, "%s: cannot convert cBPF to eBPF\n", __func__);
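The rule behind this fix (and the similar bcmfs and idxd ones queued in this series), as a minimal sketch with illustrative names: memory obtained from rte_malloc()/rte_zmalloc() lives in DPDK's heap and must be returned with rte_free(); mixing allocators corrupts one of the two heaps.

#include <rte_malloc.h>
#include <stdlib.h>

static void
alloc_free_pairing(void)
{
	void *dpdk_obj = rte_zmalloc("example", 64, 0);
	void *libc_obj = malloc(64);

	rte_free(dpdk_obj);	/* rte_zmalloc/rte_malloc -> rte_free */
	free(libc_obj);		/* malloc/calloc          -> free     */
	/* free(dpdk_obj) or rte_free(libc_obj) is the bug class fixed here. */
}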
* patch 'baseband/la12xx: fix use after free in modem config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (12 preceding siblings ...)
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
` (106 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Hemant Agrawal, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e90be36798669790e22e49aa3db399630e8a4f48
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e90be36798669790e22e49aa3db399630e8a4f48 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:19 -0700
Subject: [PATCH] baseband/la12xx: fix use after free in modem config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6ffb34498913f84713e98d6a2a21d2a86028a604 ]
The info pointer (hp) could get freed twice.
Fix this by setting the pointer to NULL after freeing.
In function 'setup_la12xx_dev',
inlined from 'la12xx_bbdev_create' at
../drivers/baseband/la12xx/bbdev_la12xx.c:1029:8,
inlined from 'la12xx_bbdev_probe' at
../drivers/baseband/la12xx/bbdev_la12xx.c:1075:9:
../drivers/baseband/la12xx/bbdev_la12xx.c:901:9:
error: pointer 'hp_info' may be used after 'rte_free'
[-Werror=use-after-free]
901 | rte_free(hp);
| ^~~~~~~~~~~~
../drivers/baseband/la12xx/bbdev_la12xx.c:791:17:
note: call to 'rte_free' here
791 | rte_free(hp);
| ^~~~~~~~~~~~
Fixes: 24d0ba22546e ("baseband/la12xx: add queue and modem config")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/baseband/la12xx/bbdev_la12xx.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/baseband/la12xx/bbdev_la12xx.c b/drivers/baseband/la12xx/bbdev_la12xx.c
index af4b4f1e9a..2432cdf884 100644
--- a/drivers/baseband/la12xx/bbdev_la12xx.c
+++ b/drivers/baseband/la12xx/bbdev_la12xx.c
@@ -789,6 +789,7 @@ setup_la12xx_dev(struct rte_bbdev *dev)
ipc_priv->hugepg_start.size = hp->len;
rte_free(hp);
+ hp = NULL;
}
dev_ipc = open_ipc_dev(priv->modem_id);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.104682320 +0800
+++ 0014-baseband-la12xx-fix-use-after-free-in-modem-config.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 6ffb34498913f84713e98d6a2a21d2a86028a604 Mon Sep 17 00:00:00 2001
+From e90be36798669790e22e49aa3db399630e8a4f48 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6ffb34498913f84713e98d6a2a21d2a86028a604 ]
@@ -28 +30,0 @@
-Cc: stable@dpdk.org
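A minimal illustration of the pattern applied above (the helper is hypothetical): clearing the pointer immediately after rte_free() turns a later cleanup path's free of the same variable into a harmless no-op, since rte_free(NULL) does nothing.

#include <rte_malloc.h>

static void
release_once(void **pp)
{
	rte_free(*pp);	/* free the object ...              */
	*pp = NULL;	/* ... and forget the stale pointer */
}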
* patch 'common/qat: fix use after free in device probe' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (13 preceding siblings ...)
2024-11-11 6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: fix use after free in mailbox init' " Xueming Li
` (105 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=67197a5768eb0b2579058cbf1862eba4c43537aa
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 67197a5768eb0b2579058cbf1862eba4c43537aa Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:17 -0700
Subject: [PATCH] common/qat: fix use after free in device probe
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb ]
Checking the return value of rte_memzone_free() is pointless;
if it failed, it was because the pointer was null.
Fixes: 7b1374b1e6e7 ("common/qat: limit configuration to primary process")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/common/qat/qat_device.c | 6 +-----
1 file changed, 1 insertion(+), 5 deletions(-)
diff --git a/drivers/common/qat/qat_device.c b/drivers/common/qat/qat_device.c
index eceb5c89c4..6901fb3aab 100644
--- a/drivers/common/qat/qat_device.c
+++ b/drivers/common/qat/qat_device.c
@@ -335,11 +335,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
return qat_dev;
error:
- if (rte_memzone_free(qat_dev_mz)) {
- QAT_LOG(DEBUG,
- "QAT internal error! Trying to free already allocated memzone: %s",
- qat_dev_mz->name);
- }
+ rte_memzone_free(qat_dev_mz);
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.137913819 +0800
+++ 0015-common-qat-fix-use-after-free-in-device-probe.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb Mon Sep 17 00:00:00 2001
+From 67197a5768eb0b2579058cbf1862eba4c43537aa Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1af60a8ce25a4a1a2ae1da6c00f432ce89a4c2eb ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 4a972a83bd..bca88fd9bd 100644
+index eceb5c89c4..6901fb3aab 100644
@@ -27 +29,2 @@
-@@ -390,11 +390,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev)
+@@ -335,11 +335,7 @@ qat_pci_device_allocate(struct rte_pci_device *pci_dev,
+
@@ -30 +32,0 @@
- rte_free(qat_dev->command_line);
* patch 'common/idpf: fix use after free in mailbox init' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (14 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'crypto/bcmfs: fix free function mismatch' " Xueming Li
` (104 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=91f32226a7208deb90b1594cfeb769399b315687
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 91f32226a7208deb90b1594cfeb769399b315687 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:20 -0700
Subject: [PATCH] common/idpf: fix use after free in mailbox init
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4baf54ed9dc87b89ea2150578c51120bc0157bb0 ]
The macro in this driver was redefining LIST_FOR_EACH_ENTRY_SAFE
as a simple LIST_FOR_EACH macro.
But they are not the same: the _SAFE variant guarantees that
there will be no use after free.
Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/common/idpf/base/idpf_osdep.h | 10 ++++++++--
drivers/common/idpf/idpf_common_device.c | 3 +--
2 files changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/common/idpf/base/idpf_osdep.h b/drivers/common/idpf/base/idpf_osdep.h
index 74a376cb13..581a36cc40 100644
--- a/drivers/common/idpf/base/idpf_osdep.h
+++ b/drivers/common/idpf/base/idpf_osdep.h
@@ -341,10 +341,16 @@ idpf_hweight32(u32 num)
#define LIST_ENTRY_TYPE(type) LIST_ENTRY(type)
#endif
+#ifndef LIST_FOREACH_SAFE
+#define LIST_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = LIST_FIRST((head)); \
+ (var) && ((tvar) = LIST_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
#ifndef LIST_FOR_EACH_ENTRY_SAFE
#define LIST_FOR_EACH_ENTRY_SAFE(pos, temp, head, entry_type, list) \
- LIST_FOREACH(pos, head, list)
-
+ LIST_FOREACH_SAFE(pos, head, list, temp)
#endif
#ifndef LIST_FOR_EACH_ENTRY
diff --git a/drivers/common/idpf/idpf_common_device.c b/drivers/common/idpf/idpf_common_device.c
index cc4207a46e..77c58170b3 100644
--- a/drivers/common/idpf/idpf_common_device.c
+++ b/drivers/common/idpf/idpf_common_device.c
@@ -136,8 +136,7 @@ idpf_init_mbx(struct idpf_hw *hw)
if (ret != 0)
return ret;
- LIST_FOR_EACH_ENTRY_SAFE(ctlq, NULL, &hw->cq_list_head,
- struct idpf_ctlq_info, cq_list) {
+ LIST_FOR_EACH_ENTRY(ctlq, &hw->cq_list_head, struct idpf_ctlq_info, cq_list) {
if (ctlq->q_id == IDPF_CTLQ_ID &&
ctlq->cq_type == IDPF_CTLQ_TYPE_MAILBOX_TX)
hw->asq = ctlq;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.178396818 +0800
+++ 0016-common-idpf-fix-use-after-free-in-mailbox-init.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 4baf54ed9dc87b89ea2150578c51120bc0157bb0 Mon Sep 17 00:00:00 2001
+From 91f32226a7208deb90b1594cfeb769399b315687 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4baf54ed9dc87b89ea2150578c51120bc0157bb0 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index e042ef871c..cf9e553906 100644
+index 74a376cb13..581a36cc40 100644
@@ -50 +52 @@
-index 8403ed83f9..e9fa024850 100644
+index cc4207a46e..77c58170b3 100644
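For reference, a self-contained sketch of why the _SAFE iteration matters when elements are freed inside the loop; it uses the plain <sys/queue.h> macros rather than the driver's wrappers, and the node type is illustrative.

#include <sys/queue.h>
#include <stdlib.h>

#ifndef LIST_FOREACH_SAFE
#define LIST_FOREACH_SAFE(var, head, field, tvar)		\
	for ((var) = LIST_FIRST((head));			\
	     (var) && ((tvar) = LIST_NEXT((var), field), 1);	\
	     (var) = (tvar))
#endif

struct node {
	LIST_ENTRY(node) link;
};
LIST_HEAD(node_list, node);

/* The next pointer is saved in 'tmp' before the current element is removed
 * and freed, so the loop never reads a freed node. A plain LIST_FOREACH
 * would evaluate LIST_NEXT(n, link) after free(n). */
static void
drain_list(struct node_list *head)
{
	struct node *n, *tmp;

	LIST_FOREACH_SAFE(n, head, link, tmp) {
		LIST_REMOVE(n, link);
		free(n);
	}
}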
* patch 'crypto/bcmfs: fix free function mismatch' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (15 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/idpf: fix use after free in mailbox init' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'dma/idxd: fix free function mismatch in device probe' " Xueming Li
` (103 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Ajit Khaparde, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fd941582eaf51694100db70a67551375476093d2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fd941582eaf51694100db70a67551375476093d2 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:06 -0700
Subject: [PATCH] crypto/bcmfs: fix free function mismatch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b1703af8e77d9e872e2ead92ab2dbcf290686f78 ]
The device structure is allocated with rte_malloc() and
then incorrectly freed with free().
This will lead to a corrupted malloc pool.
Bugzilla ID: 1552
Fixes: c8e79da7c676 ("crypto/bcmfs: introduce BCMFS driver")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/crypto/bcmfs/bcmfs_device.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index ada7ba342c..46522970d5 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -139,7 +139,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
return fsdev;
cleanup:
- free(fsdev);
+ rte_free(fsdev);
return NULL;
}
@@ -163,7 +163,7 @@ fsdev_release(struct bcmfs_device *fsdev)
return;
TAILQ_REMOVE(&fsdev_list, fsdev, next);
- free(fsdev);
+ rte_free(fsdev);
}
static int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.212454218 +0800
+++ 0017-crypto-bcmfs-fix-free-function-mismatch.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From b1703af8e77d9e872e2ead92ab2dbcf290686f78 Mon Sep 17 00:00:00 2001
+From fd941582eaf51694100db70a67551375476093d2 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b1703af8e77d9e872e2ead92ab2dbcf290686f78 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
* patch 'dma/idxd: fix free function mismatch in device probe' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (16 preceding siblings ...)
2024-11-11 6:27 ` patch 'crypto/bcmfs: fix free function mismatch' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix free function mismatch in port config' " Xueming Li
` (102 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Bruce Richardson, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ce1390c98af10b10a62004f28fa5ed90121fd760
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ce1390c98af10b10a62004f28fa5ed90121fd760 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:07 -0700
Subject: [PATCH] dma/idxd: fix free function mismatch in device probe
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 91b026fb46d987e68c1152b0bb5f0bc8f1f274db ]
The data structure is allocated with rte_malloc() and incorrectly
freed in the cleanup logic using free().
Bugzilla ID: 1549
Fixes: 9449330a8458 ("dma/idxd: create dmadev instances on PCI probe")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/dma/idxd/idxd_pci.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/dma/idxd/idxd_pci.c b/drivers/dma/idxd/idxd_pci.c
index 2ee78773bb..c314aee65c 100644
--- a/drivers/dma/idxd/idxd_pci.c
+++ b/drivers/dma/idxd/idxd_pci.c
@@ -300,7 +300,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
return nb_wqs;
err:
- free(pci);
+ rte_free(pci);
return err_code;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.251908317 +0800
+++ 0018-dma-idxd-fix-free-function-mismatch-in-device-probe.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From 91b026fb46d987e68c1152b0bb5f0bc8f1f274db Mon Sep 17 00:00:00 2001
+From ce1390c98af10b10a62004f28fa5ed90121fd760 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 91b026fb46d987e68c1152b0bb5f0bc8f1f274db ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 60ac219559..6ed03e96da 100644
+index 2ee78773bb..c314aee65c 100644
@@ -29 +31 @@
-@@ -301,7 +301,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
+@@ -300,7 +300,7 @@ init_pci_device(struct rte_pci_device *dev, struct idxd_dmadev *idxd,
* patch 'event/cnxk: fix free function mismatch in port config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (17 preceding siblings ...)
2024-11-11 6:27 ` patch 'dma/idxd: fix free function mismatch in device probe' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix use after free in mempool create' " Xueming Li
` (101 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Pavan Nikhilesh, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a3762499271d5541b0741fbceb9ccbdd5c36f557
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a3762499271d5541b0741fbceb9ccbdd5c36f557 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:08 -0700
Subject: [PATCH] event/cnxk: fix free function mismatch in port config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit db92f4e2ce491bb96605621cdd6f6251ea3bde85 ]
The error-cleanup code would dereference a null pointer and
then pass the result to rte_free().
Fixes: 97a05c1fe634 ("event/cnxk: add port config")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/event/cnxk/cnxk_eventdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 20f7f0d6df..f44d8fb377 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -118,8 +118,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
return 0;
hws_fini:
for (i = i - 1; i >= 0; i--) {
- event_dev->data->ports[i] = NULL;
rte_free(cnxk_sso_hws_get_cookie(event_dev->data->ports[i]));
+ event_dev->data->ports[i] = NULL;
}
return -ENOMEM;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.288408016 +0800
+++ 0019-event-cnxk-fix-free-function-mismatch-in-port-config.patch 2024-11-11 14:23:05.032192841 +0800
@@ -1 +1 @@
-From db92f4e2ce491bb96605621cdd6f6251ea3bde85 Mon Sep 17 00:00:00 2001
+From a3762499271d5541b0741fbceb9ccbdd5c36f557 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit db92f4e2ce491bb96605621cdd6f6251ea3bde85 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index c1df481827..84a55511a3 100644
+index 20f7f0d6df..f44d8fb377 100644
@@ -28 +30 @@
-@@ -121,8 +121,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
+@@ -118,8 +118,8 @@ cnxk_setup_event_ports(const struct rte_eventdev *event_dev,
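The ordering rule behind this one-line move, as a small sketch (the helpers and the cookie-before-object layout are purely illustrative, standing in for cnxk_sso_hws_get_cookie()): anything derived from a pointer must be read and released before the slot holding that pointer is cleared.

#include <rte_malloc.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout: the allocation starts with a cookie header and the
 * "port" handed out points just past it, so releasing the port means
 * freeing the cookie address, not the port address. */
static void *
alloc_port(size_t sz)
{
	char *base = rte_zmalloc("port", sizeof(uint64_t) + sz, 0);

	return base == NULL ? NULL : base + sizeof(uint64_t);
}

static inline void *
port_to_cookie(void *port)
{
	return port == NULL ? NULL : (char *)port - sizeof(uint64_t);
}

static void
release_port_slot(void **slot)
{
	rte_free(port_to_cookie(*slot));	/* use the pointer first ... */
	*slot = NULL;				/* ... then clear the slot   */
}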
* patch 'net/cnxk: fix use after free in mempool create' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (18 preceding siblings ...)
2024-11-11 6:27 ` patch 'event/cnxk: fix free function mismatch in port config' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix invalid free in JSON parser' " Xueming Li
` (100 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c033d1168d448b3e4bb2b991abaac255331ed056
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c033d1168d448b3e4bb2b991abaac255331ed056 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:10 -0700
Subject: [PATCH] net/cnxk: fix use after free in mempool create
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c024de17933128f37b1dfe38a0fae9975be1b104 ]
The driver would refer to the mempool object after it was freed.
Bugzilla ID: 1554
Fixes: 7ea187184a51 ("common/cnxk: support 1-N pool-aura per NIX LF")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/cnxk/cnxk_ethdev_sec.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_sec.c b/drivers/net/cnxk/cnxk_ethdev_sec.c
index b02dac4952..2cb2050faf 100644
--- a/drivers/net/cnxk/cnxk_ethdev_sec.c
+++ b/drivers/net/cnxk/cnxk_ethdev_sec.c
@@ -135,8 +135,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
return -EINVAL;
}
- rte_mempool_free(hp);
plt_free(hp->pool_config);
+ rte_mempool_free(hp);
*aura_handle = 0;
*mpool = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.321414215 +0800
+++ 0020-net-cnxk-fix-use-after-free-in-mempool-create.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From c024de17933128f37b1dfe38a0fae9975be1b104 Mon Sep 17 00:00:00 2001
+From c033d1168d448b3e4bb2b991abaac255331ed056 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c024de17933128f37b1dfe38a0fae9975be1b104 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 6f5319e534..e428d2115d 100644
+index b02dac4952..2cb2050faf 100644
@@ -27 +29 @@
-@@ -136,8 +136,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
+@@ -135,8 +135,8 @@ cnxk_nix_inl_custom_meta_pool_cb(uintptr_t pmpool, uintptr_t *mpool, const char
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/cpfl: fix invalid free in JSON parser' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (19 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix use after free in mempool create' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
` (99 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=55f413c5ad5c01b4149b239c2342c833d52f77c5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 55f413c5ad5c01b4149b239c2342c833d52f77c5 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:14 -0700
Subject: [PATCH] net/cpfl: fix invalid free in JSON parser
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 ]
With proper annotation, GCC discovers that this driver is calling
rte_free() on an object that was not allocated
(it is part of an array in another object).
In function ‘cpfl_flow_js_mr_layout’,
inlined from ‘cpfl_flow_js_mr_action’ at
../drivers/net/cpfl/cpfl_flow_parser.c:848:9,
inlined from ‘cpfl_flow_js_mod_rule’ at
../drivers/net/cpfl/cpfl_flow_parser.c:908:9,
inlined from ‘cpfl_parser_init’ at
../drivers/net/cpfl/cpfl_flow_parser.c:932:8,
inlined from ‘cpfl_parser_create’ at
../drivers/net/cpfl/cpfl_flow_parser.c:959:8:
../drivers/net/cpfl/cpfl_flow_parser.c:740:9: warning:
‘rte_free’ called on pointer ‘*parser.modifications’ with
nonzero offset [28, 15479062120396] [-Wfree-nonheap-object]
740 | rte_free(js_mod->layout);
| ^~~~~~~~~~~~~~~~~~~~~~~~
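A stripped-down sketch of what -Wfree-nonheap-object catches, with hypothetical names: the layout array lives inside another allocation, so it must never be passed to a free function on its own.

#include <stdlib.h>

struct layout {
	int type;
};

struct mod_action {
	struct layout layout[8];   /* embedded array, not a separate allocation */
};

static void mod_action_destroy(struct mod_action *act)
{
	/* wrong: free(act->layout); the array is part of 'act' and was never
	 * returned by an allocator, so only the containing object is freed */
	free(act);
}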
Fixes: 6cc97c9971d7 ("net/cpfl: build action mapping rules from JSON")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/cpfl/cpfl_flow_parser.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 011229a470..303e979015 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -737,7 +737,6 @@ cpfl_flow_js_mr_layout(json_t *ob_layouts, struct cpfl_flow_js_mr_action_mod *js
return 0;
err:
- rte_free(js_mod->layout);
return -EINVAL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.355857615 +0800
+++ 0021-net-cpfl-fix-invalid-free-in-JSON-parser.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 Mon Sep 17 00:00:00 2001
+From 55f413c5ad5c01b4149b239c2342c833d52f77c5 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1c20cf5be5c8b3e09673a44da2ce532ec0f35236 ]
@@ -29 +31,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/e1000: fix use after free in filter flush' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (20 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cpfl: fix invalid free in JSON parser' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
` (98 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cda329444d492b928d0e9f71c5e3bdfb9acc80e4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cda329444d492b928d0e9f71c5e3bdfb9acc80e4 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:12 -0700
Subject: [PATCH] net/e1000: fix use after free in filter flush
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 58196dc411576925a1d66b0da1d11b06072a7ac2 ]
The driver cleanup code was freeing the filter object then
dereferencing it.
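The shape of the problem in a plain-C sketch (write_reg() is a hypothetical stand-in for the register macros): either capture the fields you still need before freeing, or, as the patch does, move the free after the last use.

#include <stdint.h>
#include <stdlib.h>

struct two_tuple_filter {
	uint8_t index;
};

static void write_reg(uint32_t reg, uint32_t val)
{
	(void)reg;
	(void)val;   /* stub for the real register write */
}

static void filter_delete(struct two_tuple_filter *filter)
{
	/* disable the hardware entry while 'filter' is still valid ... */
	write_reg(0x100u + filter->index, 0);
	/* ... and only then release the tracking object */
	free(filter);
}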
Bugzilla ID: 1550
Fixes: 6a4d050e2855 ("net/igb: flush all the filter")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/e1000/igb_ethdev.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index d64a1aedd3..222e359ed9 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -3857,11 +3857,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
filter_info->twotuple_mask &= ~(1 << filter->index);
TAILQ_REMOVE(&filter_info->twotuple_list, filter, entries);
- rte_free(filter);
E1000_WRITE_REG(hw, E1000_TTQF(filter->index), E1000_TTQF_DISABLE_MASK);
E1000_WRITE_REG(hw, E1000_IMIR(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIREXT(filter->index), 0);
+ rte_free(filter);
return 0;
}
@@ -4298,7 +4298,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
filter_info->fivetuple_mask &= ~(1 << filter->index);
TAILQ_REMOVE(&filter_info->fivetuple_list, filter, entries);
- rte_free(filter);
E1000_WRITE_REG(hw, E1000_FTQF(filter->index),
E1000_FTQF_VF_BP | E1000_FTQF_MASK);
@@ -4307,6 +4306,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
E1000_WRITE_REG(hw, E1000_SPQF(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIR(filter->index), 0);
E1000_WRITE_REG(hw, E1000_IMIREXT(filter->index), 0);
+ rte_free(filter);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.389249014 +0800
+++ 0022-net-e1000-fix-use-after-free-in-filter-flush.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From 58196dc411576925a1d66b0da1d11b06072a7ac2 Mon Sep 17 00:00:00 2001
+From cda329444d492b928d0e9f71c5e3bdfb9acc80e4 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 58196dc411576925a1d66b0da1d11b06072a7ac2 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 1e0a483d4a..d3a9181874 100644
+index d64a1aedd3..222e359ed9 100644
@@ -28 +30 @@
-@@ -3907,11 +3907,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
+@@ -3857,11 +3857,11 @@ igb_delete_2tuple_filter(struct rte_eth_dev *dev,
@@ -41 +43 @@
-@@ -4348,7 +4348,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
+@@ -4298,7 +4298,6 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
@@ -49 +51 @@
-@@ -4357,6 +4356,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
+@@ -4307,6 +4306,7 @@ igb_delete_5tuple_filter_82576(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfp: fix double free in flow destroy' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (21 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
` (97 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6560fe7e85a85543bd747377f9931aca3f325200
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6560fe7e85a85543bd747377f9931aca3f325200 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:15 -0700
Subject: [PATCH] net/nfp: fix double free in flow destroy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 ]
Calling rte_free twice on the same object will corrupt the heap.
Warning is:
In function 'nfp_pre_tun_table_check_del',
inlined from 'nfp_flow_destroy' at
../drivers/net/nfp/flower/nfp_flower_flow.c:5143:9:
../drivers/net/nfp/flower/nfp_flower_flow.c:3830:9:
error: pointer 'entry' used after 'rte_free'
[-Werror=use-after-free]
3830 | rte_free(entry);
| ^~~~~~~~~~~~~~~
../drivers/net/nfp/flower/nfp_flower_flow.c:3825:9:
note: call to 'rte_free' here
3825 | rte_free(entry);
| ^~~~~~~~~~~~~~~
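A reduced sketch of the double free the warning points at (hypothetical names): the same allocation reached through two pointers must be released exactly once.

#include <stdlib.h>

struct tun_entry {
	int ref;
};

/* 'entry' and 'find_entry' can refer to the same allocation, e.g. when a
 * lookup returned the entry that was passed in */
static void pre_tun_check_del(struct tun_entry *entry, struct tun_entry *find_entry)
{
	(void)entry;
	/*
	 * buggy shape:
	 *     free(entry);
	 *     free(find_entry);   // second free of the same object
	 *
	 * fixed shape: free the object once
	 */
	free(find_entry);
}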
Bugzilla ID: 1555
Fixes: d3c33bdf1f18 ("net/nfp: prepare for IPv4 UDP tunnel decap flow action")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/nfp/nfp_flow.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 91ebee5db4..13f58b210e 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -3177,7 +3177,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
goto free_entry;
}
- rte_free(entry);
rte_free(find_entry);
priv->pre_tun_cnt--;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.424343613 +0800
+++ 0023-net-nfp-fix-double-free-in-flow-destroy.patch 2024-11-11 14:23:05.042192841 +0800
@@ -1 +1 @@
-From fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 Mon Sep 17 00:00:00 2001
+From 6560fe7e85a85543bd747377f9931aca3f325200 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fae5c633522efd30b6cb2c7a1bdfeb7e19e2f369 ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -33 +35 @@
- drivers/net/nfp/flower/nfp_flower_flow.c | 1 -
+ drivers/net/nfp/nfp_flow.c | 1 -
@@ -36,5 +38,5 @@
-diff --git a/drivers/net/nfp/flower/nfp_flower_flow.c b/drivers/net/nfp/flower/nfp_flower_flow.c
-index 0078455658..64a0062c8b 100644
---- a/drivers/net/nfp/flower/nfp_flower_flow.c
-+++ b/drivers/net/nfp/flower/nfp_flower_flow.c
-@@ -3822,7 +3822,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
+diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
+index 91ebee5db4..13f58b210e 100644
+--- a/drivers/net/nfp/nfp_flow.c
++++ b/drivers/net/nfp/nfp_flow.c
+@@ -3177,7 +3177,6 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/sfc: fix use after free in debug logs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (22 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
` (96 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Ivan Malov, Andrew Rybchenko, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5a5a90d8fd00b0afb5d50081df1081412c601514
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5a5a90d8fd00b0afb5d50081df1081412c601514 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:13 -0700
Subject: [PATCH] net/sfc: fix use after free in debug logs
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 757b0b6f207c072a550f43836856235aa41553ad ]
If compiler detection of use-after-free is enabled then this driver's
debug messages will cause warnings. Change to move the debug message
before the object is freed.
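The rule the patch applies, shown as a plain-C sketch of the RSS context teardown: format the message while the object is still alive, because after free() even the pointer value counts as dead to the compiler's analysis.

#include <stdio.h>
#include <stdlib.h>

struct rss_ctx {
	unsigned int *qid_offsets;
};

static void rss_ctx_del(struct rss_ctx *ctx)
{
	free(ctx->qid_offsets);

	/* log first, free last */
	printf("flow-rss: deleted ctx=%p\n", (void *)ctx);

	free(ctx);
}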
Bugzilla ID: 1551
Fixes: 55c1238246d5 ("net/sfc: add more debug messages to transfer flows")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Ivan Malov <ivan.malov@arknetworks.am>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/net/sfc/sfc_flow_rss.c | 4 ++--
drivers/net/sfc/sfc_mae.c | 23 +++++++++--------------
2 files changed, 11 insertions(+), 16 deletions(-)
diff --git a/drivers/net/sfc/sfc_flow_rss.c b/drivers/net/sfc/sfc_flow_rss.c
index e28c943335..8e2749833b 100644
--- a/drivers/net/sfc/sfc_flow_rss.c
+++ b/drivers/net/sfc/sfc_flow_rss.c
@@ -303,9 +303,9 @@ sfc_flow_rss_ctx_del(struct sfc_adapter *sa, struct sfc_flow_rss_ctx *ctx)
TAILQ_REMOVE(&flow_rss->ctx_list, ctx, entries);
rte_free(ctx->qid_offsets);
- rte_free(ctx);
-
sfc_dbg(sa, "flow-rss: deleted ctx=%p", ctx);
+
+ rte_free(ctx);
}
static int
diff --git a/drivers/net/sfc/sfc_mae.c b/drivers/net/sfc/sfc_mae.c
index 60ff6d2181..8f74f10390 100644
--- a/drivers/net/sfc/sfc_mae.c
+++ b/drivers/net/sfc/sfc_mae.c
@@ -400,9 +400,8 @@ sfc_mae_outer_rule_del(struct sfc_adapter *sa,
efx_mae_match_spec_fini(sa->nic, rule->match_spec);
TAILQ_REMOVE(&mae->outer_rules, rule, entries);
- rte_free(rule);
-
sfc_dbg(sa, "deleted outer_rule=%p", rule);
+ rte_free(rule);
}
static int
@@ -585,9 +584,8 @@ sfc_mae_mac_addr_del(struct sfc_adapter *sa, struct sfc_mae_mac_addr *mac_addr)
}
TAILQ_REMOVE(&mae->mac_addrs, mac_addr, entries);
- rte_free(mac_addr);
-
sfc_dbg(sa, "deleted mac_addr=%p", mac_addr);
+ rte_free(mac_addr);
}
enum sfc_mae_mac_addr_type {
@@ -785,10 +783,10 @@ sfc_mae_encap_header_del(struct sfc_adapter *sa,
}
TAILQ_REMOVE(&mae->encap_headers, encap_header, entries);
+ sfc_dbg(sa, "deleted encap_header=%p", encap_header);
+
rte_free(encap_header->buf);
rte_free(encap_header);
-
- sfc_dbg(sa, "deleted encap_header=%p", encap_header);
}
static int
@@ -983,9 +981,8 @@ sfc_mae_counter_del(struct sfc_adapter *sa, struct sfc_mae_counter *counter)
}
TAILQ_REMOVE(&mae->counters, counter, entries);
- rte_free(counter);
-
sfc_dbg(sa, "deleted counter=%p", counter);
+ rte_free(counter);
}
static int
@@ -1165,9 +1162,8 @@ sfc_mae_action_set_del(struct sfc_adapter *sa,
sfc_mae_mac_addr_del(sa, action_set->src_mac_addr);
sfc_mae_counter_del(sa, action_set->counter);
TAILQ_REMOVE(&mae->action_sets, action_set, entries);
- rte_free(action_set);
-
sfc_dbg(sa, "deleted action_set=%p", action_set);
+ rte_free(action_set);
}
static int
@@ -1401,10 +1397,10 @@ sfc_mae_action_set_list_del(struct sfc_adapter *sa,
sfc_mae_action_set_del(sa, action_set_list->action_sets[i]);
TAILQ_REMOVE(&mae->action_set_lists, action_set_list, entries);
+ sfc_dbg(sa, "deleted action_set_list=%p", action_set_list);
+
rte_free(action_set_list->action_sets);
rte_free(action_set_list);
-
- sfc_dbg(sa, "deleted action_set_list=%p", action_set_list);
}
static int
@@ -1667,9 +1663,8 @@ sfc_mae_action_rule_del(struct sfc_adapter *sa,
sfc_mae_outer_rule_del(sa, rule->outer_rule);
TAILQ_REMOVE(&mae->action_rules, rule, entries);
- rte_free(rule);
-
sfc_dbg(sa, "deleted action_rule=%p", rule);
+ rte_free(rule);
}
static int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.465646112 +0800
+++ 0024-net-sfc-fix-use-after-free-in-debug-logs.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From 757b0b6f207c072a550f43836856235aa41553ad Mon Sep 17 00:00:00 2001
+From 5a5a90d8fd00b0afb5d50081df1081412c601514 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 757b0b6f207c072a550f43836856235aa41553ad ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'raw/ifpga/base: fix use after free' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (23 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
` (95 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:16 -0700
Subject: [PATCH] raw/ifpga/base: fix use after free
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 11986223b54d981300e9de2d365c494eb274645c ]
The TAILQ_FOREACH() macro would refer to info after it
had been freed. Fix by introducing TAILQ_FOREACH_SAFE here.
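A standalone illustration of why the safe variant is required when the loop body frees the current node (plain C, <sys/queue.h>): TAILQ_FOREACH() reads TAILQ_NEXT() from the element after it has been freed, while the _SAFE form caches the successor first.

#include <stdlib.h>
#include <sys/queue.h>

#ifndef TAILQ_FOREACH_SAFE
#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
	for ((var) = TAILQ_FIRST((head)); \
	    (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
	    (var) = (tvar))
#endif

struct sensor {
	TAILQ_ENTRY(sensor) node;
};
TAILQ_HEAD(sensor_list, sensor);

static void sensor_list_uninit(struct sensor_list *head)
{
	struct sensor *info, *next;

	TAILQ_FOREACH_SAFE(info, head, node, next) {
		TAILQ_REMOVE(head, info, node);
		free(info);   /* safe: 'next' was saved before this free */
	}
}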
Fixes: 4a19f89104f8 ("raw/ifpga/base: support multiple cards")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/raw/ifpga/base/opae_intel_max10.c | 11 +++++++++--
1 file changed, 9 insertions(+), 2 deletions(-)
diff --git a/drivers/raw/ifpga/base/opae_intel_max10.c b/drivers/raw/ifpga/base/opae_intel_max10.c
index dd97a5f9fd..d5a9ceb6e3 100644
--- a/drivers/raw/ifpga/base/opae_intel_max10.c
+++ b/drivers/raw/ifpga/base/opae_intel_max10.c
@@ -6,6 +6,13 @@
#include <libfdt.h>
#include "opae_osdep.h"
+#ifndef TAILQ_FOREACH_SAFE
+#define TAILQ_FOREACH_SAFE(var, head, field, tvar) \
+ for ((var) = TAILQ_FIRST((head)); \
+ (var) && ((tvar) = TAILQ_NEXT((var), field), 1); \
+ (var) = (tvar))
+#endif
+
int max10_sys_read(struct intel_max10_device *dev,
unsigned int offset, unsigned int *val)
{
@@ -746,9 +753,9 @@ static int fdt_get_named_reg(const void *fdt, int node, const char *name,
static void max10_sensor_uinit(struct intel_max10_device *dev)
{
- struct opae_sensor_info *info;
+ struct opae_sensor_info *info, *next;
- TAILQ_FOREACH(info, &dev->opae_sensor_list, node) {
+ TAILQ_FOREACH_SAFE(info, &dev->opae_sensor_list, node, next) {
TAILQ_REMOVE(&dev->opae_sensor_list, info, node);
opae_free(info);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.515775011 +0800
+++ 0025-raw-ifpga-base-fix-use-after-free.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From 11986223b54d981300e9de2d365c494eb274645c Mon Sep 17 00:00:00 2001
+From be5b4c9d2969486ffc3a8606c0b8b2cd77a0169e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 11986223b54d981300e9de2d365c494eb274645c ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'raw/ifpga: fix free function mismatch in interrupt config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (24 preceding siblings ...)
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
` (94 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Morten Brørup, Konstantin Ananyev,
Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a2cef42f63ee851b84e63187f31783b9f032af5f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a2cef42f63ee851b84e63187f31783b9f032af5f Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:18 -0700
Subject: [PATCH] raw/ifpga: fix free function mismatch in interrupt config
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d891a597895bb65db42404440660f82092780750 ]
The raw ifpga driver redefines malloc to be opae_malloc
and free to be opae_free, which is a bad idea.
This leads to a case where the interrupt efd array is allocated with calloc()
and then passed to rte_free().
The workaround is to allocate the array with rte_calloc() instead.
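A small sketch of the pairing rule, assuming an initialized EAL: memory that ends up in rte_free() has to come from the rte_malloc family, so the array is allocated with rte_calloc() rather than libc calloc().

#include <rte_malloc.h>

static int *alloc_intr_efds(unsigned int nb_intr)
{
	/* paired with rte_free(); mixing calloc()/rte_free() or
	 * rte_calloc()/free() corrupts one of the two allocators */
	return rte_calloc("ifpga_efds", nb_intr, sizeof(int), 0);
}

static void release_intr_efds(int *efds)
{
	rte_free(efds);
}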
Fixes: d61138d4f0e2 ("drivers: remove direct access to interrupt handle")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
drivers/raw/ifpga/ifpga_rawdev.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/raw/ifpga/ifpga_rawdev.c b/drivers/raw/ifpga/ifpga_rawdev.c
index 997fbf8a0d..3b4d771d1b 100644
--- a/drivers/raw/ifpga/ifpga_rawdev.c
+++ b/drivers/raw/ifpga/ifpga_rawdev.c
@@ -1498,7 +1498,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
nb_intr = rte_intr_nb_intr_get(*intr_handle);
- intr_efds = calloc(nb_intr, sizeof(int));
+ intr_efds = rte_calloc("ifpga_efds", nb_intr, sizeof(int), 0);
if (!intr_efds)
return -ENOMEM;
@@ -1507,7 +1507,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
ret = opae_acc_set_irq(acc, vec_start, count, intr_efds);
if (ret) {
- free(intr_efds);
+ rte_free(intr_efds);
return -EINVAL;
}
}
@@ -1516,13 +1516,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
ret = rte_intr_callback_register(*intr_handle,
handler, (void *)arg);
if (ret) {
- free(intr_efds);
+ rte_free(intr_efds);
return -EINVAL;
}
IFPGA_RAWDEV_PMD_INFO("success register %s interrupt", name);
- free(intr_efds);
+ rte_free(intr_efds);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.555587211 +0800
+++ 0026-raw-ifpga-fix-free-function-mismatch-in-interrupt-co.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From d891a597895bb65db42404440660f82092780750 Mon Sep 17 00:00:00 2001
+From a2cef42f63ee851b84e63187f31783b9f032af5f Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d891a597895bb65db42404440660f82092780750 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 113a22b0a7..5b9b596435 100644
+index 997fbf8a0d..3b4d771d1b 100644
@@ -31 +33 @@
-@@ -1499,7 +1499,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1498,7 +1498,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -40 +42 @@
-@@ -1508,7 +1508,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1507,7 +1507,7 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
@@ -49 +51 @@
-@@ -1517,13 +1517,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
+@@ -1516,13 +1516,13 @@ ifpga_register_msix_irq(struct ifpga_rawdev *dev, int port_id,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'examples/vhost: fix free function mismatch' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (25 preceding siblings ...)
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
` (93 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Chengwen Feng, Chenbo Xia, Morten Brørup,
Konstantin Ananyev, Wathsala Vithanage, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=780c6918ffeccf266bf5505dc2b315a348d60d1e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 780c6918ffeccf266bf5505dc2b315a348d60d1e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 8 Oct 2024 09:47:09 -0700
Subject: [PATCH] examples/vhost: fix free function mismatch
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ae67f7d0256687fdfb24d27ee94b20d88c65108e ]
The pointer bdev is allocated with rte_zmalloc() and then
incorrectly freed with free() which will lead to pool corruption.
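The same pairing rule on an error path, as a sketch with a simplified structure: an object obtained from rte_zmalloc() must be released with rte_free(), including in the failure branch.

#include <stdint.h>
#include <rte_malloc.h>

struct blk_dev {
	void *data;
};

static struct blk_dev *blk_dev_construct(uint64_t blk_cnt, uint32_t blk_size)
{
	struct blk_dev *bdev = rte_zmalloc(NULL, sizeof(*bdev), 0);

	if (bdev == NULL)
		return NULL;

	bdev->data = rte_zmalloc(NULL, blk_cnt * blk_size, 0);
	if (bdev->data == NULL) {
		rte_free(bdev);   /* not libc free(): the object came from rte_zmalloc() */
		return NULL;
	}
	return bdev;
}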
Bugzilla ID: 1553
Fixes: c19beb3f38cd ("examples/vhost_blk: introduce vhost storage sample")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
---
examples/vhost_blk/vhost_blk.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/examples/vhost_blk/vhost_blk.c b/examples/vhost_blk/vhost_blk.c
index 376f7b89a7..4dc99eb648 100644
--- a/examples/vhost_blk/vhost_blk.c
+++ b/examples/vhost_blk/vhost_blk.c
@@ -776,7 +776,7 @@ vhost_blk_bdev_construct(const char *bdev_name,
bdev->data = rte_zmalloc(NULL, blk_cnt * blk_size, 0);
if (!bdev->data) {
fprintf(stderr, "No enough reserved huge memory for disk\n");
- free(bdev);
+ rte_free(bdev);
return NULL;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.589520410 +0800
+++ 0027-examples-vhost-fix-free-function-mismatch.patch 2024-11-11 14:23:05.052192841 +0800
@@ -1 +1 @@
-From ae67f7d0256687fdfb24d27ee94b20d88c65108e Mon Sep 17 00:00:00 2001
+From 780c6918ffeccf266bf5505dc2b315a348d60d1e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ae67f7d0256687fdfb24d27ee94b20d88c65108e ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 03f1ac9c3f..9c9e326949 100644
+index 376f7b89a7..4dc99eb648 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfb: fix use after free' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (26 preceding siblings ...)
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
` (92 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: xuemingl, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5facb377a447b0150f17cf19b1d2ab006f721a03
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5facb377a447b0150f17cf19b1d2ab006f721a03 Mon Sep 17 00:00:00 2001
From: Thomas Monjalon <thomas@monjalon.net>
Date: Thu, 10 Oct 2024 19:11:07 +0200
Subject: [PATCH] net/nfb: fix use after free
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 76da9834ebb6e43e005bd5895ff4568d0e7be78f ]
With the annotations added to the allocation functions
in commit 80da7efbb4c4 ("eal: annotate allocation functions"),
more issues are detected at compilation time:
nfb_rx.c:133:28: error: pointer 'rxq' used after 'rte_free'
It is fixed by moving the assignment before freeing the parent pointer.
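In sketch form (plain C, with the queue-close call elided): clear the member while the containing structure is still valid, then free the structure last.

#include <stdlib.h>

struct rx_queue {
	void *queue;   /* driver handle owned by the queue structure */
};

static void rx_queue_release(struct rx_queue *rxq)
{
	if (rxq != NULL && rxq->queue != NULL) {
		/* the real code closes the queue handle here */
		rxq->queue = NULL;
		free(rxq);
	}
}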
Fixes: 6435f9a0ac22 ("net/nfb: add new netcope driver")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
drivers/net/nfb/nfb_rx.c | 2 +-
drivers/net/nfb/nfb_tx.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/nfb/nfb_rx.c b/drivers/net/nfb/nfb_rx.c
index 8a9b232305..7941197b77 100644
--- a/drivers/net/nfb/nfb_rx.c
+++ b/drivers/net/nfb/nfb_rx.c
@@ -129,7 +129,7 @@ nfb_eth_rx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (rxq->queue != NULL) {
ndp_close_rx_queue(rxq->queue);
- rte_free(rxq);
rxq->queue = NULL;
+ rte_free(rxq);
}
}
diff --git a/drivers/net/nfb/nfb_tx.c b/drivers/net/nfb/nfb_tx.c
index d49fc324e7..5c38d69934 100644
--- a/drivers/net/nfb/nfb_tx.c
+++ b/drivers/net/nfb/nfb_tx.c
@@ -108,7 +108,7 @@ nfb_eth_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (txq->queue != NULL) {
ndp_close_tx_queue(txq->queue);
- rte_free(txq);
txq->queue = NULL;
+ rte_free(txq);
}
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.621227709 +0800
+++ 0028-net-nfb-fix-use-after-free.patch 2024-11-11 14:23:05.062192841 +0800
@@ -1 +1 @@
-From 76da9834ebb6e43e005bd5895ff4568d0e7be78f Mon Sep 17 00:00:00 2001
+From 5facb377a447b0150f17cf19b1d2ab006f721a03 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 76da9834ebb6e43e005bd5895ff4568d0e7be78f ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index f72afafe8f..462bc3b50d 100644
+index 8a9b232305..7941197b77 100644
@@ -38 +40 @@
-index a1318a4205..cf99268c43 100644
+index d49fc324e7..5c38d69934 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'power: enable CPPC' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (27 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
` (91 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Wathsala Vithanage; +Cc: xuemingl, Dhruv Tripathi, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=418efc7dd043b02b6666fe70b88613dd8984bb98
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 418efc7dd043b02b6666fe70b88613dd8984bb98 Mon Sep 17 00:00:00 2001
From: Wathsala Vithanage <wathsala.vithanage@arm.com>
Date: Thu, 10 Oct 2024 14:17:36 +0000
Subject: [PATCH] power: enable CPPC
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 35220c7cb3aff022b3a41919139496326ef6eecc ]
The power library already supports the Linux CPPC driver,
but initialization was failing.
Enable its use in the supported-drivers check,
and fix the CPPC driver name.
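As a hedged sketch of the kind of check involved (not the power library's code): cpufreq reports the active driver per CPU in sysfs, and the string it is compared against has to match what the kernel prints, "cppc_cpufreq".

#include <stdio.h>
#include <string.h>

static int cpu_uses_cppc(unsigned int lcore)
{
	char path[128], name[64];
	FILE *f;

	snprintf(path, sizeof(path),
		"/sys/devices/system/cpu/cpu%u/cpufreq/scaling_driver", lcore);
	f = fopen(path, "r");
	if (f == NULL)
		return 0;
	if (fgets(name, sizeof(name), f) == NULL) {
		fclose(f);
		return 0;
	}
	fclose(f);
	name[strcspn(name, "\n")] = '\0';

	return strcmp(name, "cppc_cpufreq") == 0;
}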
Fixes: ef1cc88f1837 ("power: support cppc_cpufreq driver")
Signed-off-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
Reviewed-by: Dhruv Tripathi <dhruv.tripathi@arm.com>
---
lib/power/power_cppc_cpufreq.c | 2 +-
lib/power/rte_power_pmd_mgmt.c | 11 ++++++-----
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index bb70f6ae52..f2ba684c83 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -36,7 +36,7 @@
#define POWER_SYSFILE_SYS_MAX \
"/sys/devices/system/cpu/cpu%u/cpufreq/cpuinfo_max_freq"
-#define POWER_CPPC_DRIVER "cppc-cpufreq"
+#define POWER_CPPC_DRIVER "cppc_cpufreq"
#define BUS_FREQ 100000
enum power_state {
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 6f18ed0adf..20aa753c3a 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -419,11 +419,12 @@ check_scale(unsigned int lcore)
{
enum power_management_env env;
- /* only PSTATE and ACPI modes are supported */
+ /* only PSTATE, AMD-PSTATE, ACPI and CPPC modes are supported */
if (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) &&
!rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ) &&
- !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ)) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n");
+ !rte_power_check_env_supported(PM_ENV_AMD_PSTATE_CPUFREQ) &&
+ !rte_power_check_env_supported(PM_ENV_CPPC_CPUFREQ)) {
+ RTE_LOG(DEBUG, POWER, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported\n");
return -ENOTSUP;
}
/* ensure we could initialize the power library */
@@ -433,8 +434,8 @@ check_scale(unsigned int lcore)
/* ensure we initialized the correct env */
env = rte_power_get_env();
if (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ &&
- env != PM_ENV_AMD_PSTATE_CPUFREQ) {
- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n");
+ env != PM_ENV_AMD_PSTATE_CPUFREQ && env != PM_ENV_CPPC_CPUFREQ) {
+ RTE_LOG(DEBUG, POWER, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes\n");
return -ENOTSUP;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.652309809 +0800
+++ 0029-power-enable-CPPC.patch 2024-11-11 14:23:05.062192841 +0800
@@ -1 +1 @@
-From 35220c7cb3aff022b3a41919139496326ef6eecc Mon Sep 17 00:00:00 2001
+From 418efc7dd043b02b6666fe70b88613dd8984bb98 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 35220c7cb3aff022b3a41919139496326ef6eecc ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 32aaacb948..e68b39b424 100644
+index bb70f6ae52..f2ba684c83 100644
@@ -35 +37 @@
-index b1c18a5f56..830a6c7a97 100644
+index 6f18ed0adf..20aa753c3a 100644
@@ -47 +49 @@
-- POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes are supported");
+- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes are supported\n");
@@ -50 +52 @@
-+ POWER_LOG(DEBUG, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported");
++ RTE_LOG(DEBUG, POWER, "Only ACPI, PSTATE, AMD-PSTATE, or CPPC modes are supported\n");
@@ -59 +61 @@
-- POWER_LOG(DEBUG, "Neither ACPI nor PSTATE modes were initialized");
+- RTE_LOG(DEBUG, POWER, "Neither ACPI nor PSTATE modes were initialized\n");
@@ -61 +63 @@
-+ POWER_LOG(DEBUG, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes");
++ RTE_LOG(DEBUG, POWER, "Unable to initialize ACPI, PSTATE, AMD-PSTATE, or CPPC modes\n");
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'fib6: add runtime checks in AVX512 lookup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (28 preceding siblings ...)
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
` (90 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Tue, 8 Oct 2024 17:31:36 +0000
Subject: [PATCH] fib6: add runtime checks in AVX512 lookup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 ]
The AVX512 lookup function requires the CPU to support RTE_CPUFLAG_AVX512DQ and
RTE_CPUFLAG_AVX512BW. Add runtime checks of these two flags when deciding
if the vector function can be used.
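A sketch of the gating logic, assuming the DPDK CPU-flag and vector headers: every ISA subset the vector path actually uses is checked at run time, along with the EAL's maximum SIMD bitwidth setting.

#include <stdbool.h>
#include <rte_cpuflags.h>
#include <rte_vect.h>

static bool can_use_avx512_lookup(void)
{
	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0 ||
	    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ) <= 0 ||
	    rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) <= 0 ||
	    rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512)
		return false;   /* fall back to the scalar lookup */
	return true;
}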
Fixes: 1e5630e40d95 ("fib6: add AVX512 lookup")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
lib/fib/trie.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/lib/fib/trie.c b/lib/fib/trie.c
index 09470e7287..7b33cdaa7b 100644
--- a/lib/fib/trie.c
+++ b/lib/fib/trie.c
@@ -46,8 +46,10 @@ static inline rte_fib6_lookup_fn_t
get_vector_fn(enum rte_fib_trie_nh_sz nh_sz)
{
#ifdef CC_TRIE_AVX512_SUPPORT
- if ((rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0) ||
- (rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512))
+ if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F) <= 0 ||
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512DQ) <= 0 ||
+ rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512BW) <= 0 ||
+ rte_vect_get_max_simd_bitwidth() < RTE_VECT_SIMD_512)
return NULL;
switch (nh_sz) {
case RTE_FIB6_TRIE_2B:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.699294108 +0800
+++ 0030-fib6-add-runtime-checks-in-AVX512-lookup.patch 2024-11-11 14:23:05.072192841 +0800
@@ -1 +1 @@
-From 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 Mon Sep 17 00:00:00 2001
+From f2905ef63cc3d39bb2d3bd6156613ee5c4479e1f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 45ddc5660f9830f3b7b39ddaf57af02e80d589a4 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'pcapng: fix handling of chained mbufs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (29 preceding siblings ...)
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
` (89 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Oleksandr Nahnybida; +Cc: xuemingl, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f Mon Sep 17 00:00:00 2001
From: Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Date: Fri, 13 Sep 2024 15:34:03 +0300
Subject: [PATCH] pcapng: fix handling of chained mbufs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6db358536fee7891b5cb670df94ec87543ddd0fb ]
The pcapng library generates corrupted files when dealing with chained mbufs.
This issue arises because in rte_pcapng_copy the length of the EPB block
is incorrectly calculated using the data_len of the first mbuf instead
of the pkt_len, even though rte_pcapng_write_packets correctly writes
the mbuf chain to disk.
This fix ensures that the block length is calculated based on the pkt_len,
aligning it with the actual data written to disk.
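The distinction behind the fix, as a small sketch: rte_pktmbuf_data_len() covers only one segment, while rte_pktmbuf_pkt_len() covers the whole chain, so any capture length written into a block header must use the latter.

#include <rte_mbuf.h>

static uint32_t capture_block_len(const struct rte_mbuf *m)
{
	/* wrong: rte_pktmbuf_data_len(m) -- first segment only */
	return rte_pktmbuf_pkt_len(m);
}

static uint32_t chain_total(const struct rte_mbuf *m)
{
	uint32_t total = 0;
	const struct rte_mbuf *seg;

	/* for a well-formed chain this equals rte_pktmbuf_pkt_len(m) */
	for (seg = m; seg != NULL; seg = seg->next)
		total += rte_pktmbuf_data_len(seg);
	return total;
}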
Fixes: 8d23ce8f5ee9 ("pcapng: add new library for writing pcapng files")
Signed-off-by: Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
.mailmap | 1 +
app/test/test_pcapng.c | 12 ++++++++++--
lib/pcapng/rte_pcapng.c | 12 ++++++------
3 files changed, 17 insertions(+), 8 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8b9e849d05..4022645615 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1059,6 +1059,7 @@ Odi Assli <odia@nvidia.com>
Ognjen Joldzic <ognjen.joldzic@gmail.com>
Ola Liljedahl <ola.liljedahl@arm.com>
Oleg Polyakov <olegp123@walla.co.il>
+Oleksandr Nahnybida <oleksandrn@interfacemasters.com>
Olga Shern <olgas@nvidia.com> <olgas@mellanox.com>
Olivier Gournet <ogournet@corp.free.fr>
Olivier Matz <olivier.matz@6wind.com>
diff --git a/app/test/test_pcapng.c b/app/test/test_pcapng.c
index 89535efad0..5cdde0542a 100644
--- a/app/test/test_pcapng.c
+++ b/app/test/test_pcapng.c
@@ -102,6 +102,14 @@ mbuf1_prepare(struct dummy_mbuf *dm, uint32_t plen)
pkt.udp.dgram_len = rte_cpu_to_be_16(plen);
memcpy(rte_pktmbuf_mtod(dm->mb, void *), &pkt, sizeof(pkt));
+
+ /* Idea here is to create mbuf chain big enough that after mbuf deep copy they won't be
+ * compressed into single mbuf to properly test store of chained mbufs
+ */
+ dummy_mbuf_prep(&dm->mb[1], dm->buf[1], sizeof(dm->buf[1]), pkt_len);
+ dummy_mbuf_prep(&dm->mb[2], dm->buf[2], sizeof(dm->buf[2]), pkt_len);
+ rte_pktmbuf_chain(&dm->mb[0], &dm->mb[1]);
+ rte_pktmbuf_chain(&dm->mb[0], &dm->mb[2]);
}
static int
@@ -117,7 +125,7 @@ test_setup(void)
/* Make a pool for cloned packets */
mp = rte_pktmbuf_pool_create_by_ops("pcapng_test_pool",
- MAX_BURST, 0, 0,
+ MAX_BURST * 32, 0, 0,
rte_pcapng_mbuf_size(pkt_len) + 128,
SOCKET_ID_ANY, "ring_mp_sc");
if (mp == NULL) {
@@ -155,7 +163,7 @@ fill_pcapng_file(rte_pcapng_t *pcapng, unsigned int num_packets)
for (i = 0; i < burst_size; i++) {
struct rte_mbuf *mc;
- mc = rte_pcapng_copy(port_id, 0, orig, mp, pkt_len,
+ mc = rte_pcapng_copy(port_id, 0, orig, mp, rte_pktmbuf_pkt_len(orig),
RTE_PCAPNG_DIRECTION_IN, NULL);
if (mc == NULL) {
fprintf(stderr, "Cannot copy packet\n");
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index 7254defce7..e5326c1d38 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -475,7 +475,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
const char *comment)
{
struct pcapng_enhance_packet_block *epb;
- uint32_t orig_len, data_len, padding, flags;
+ uint32_t orig_len, pkt_len, padding, flags;
struct pcapng_option *opt;
uint64_t timestamp;
uint16_t optlen;
@@ -516,8 +516,8 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
(md->ol_flags & RTE_MBUF_F_RX_RSS_HASH));
/* pad the packet to 32 bit boundary */
- data_len = rte_pktmbuf_data_len(mc);
- padding = RTE_ALIGN(data_len, sizeof(uint32_t)) - data_len;
+ pkt_len = rte_pktmbuf_pkt_len(mc);
+ padding = RTE_ALIGN(pkt_len, sizeof(uint32_t)) - pkt_len;
if (padding > 0) {
void *tail = rte_pktmbuf_append(mc, padding);
@@ -584,7 +584,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
goto fail;
epb->block_type = PCAPNG_ENHANCED_PACKET_BLOCK;
- epb->block_length = rte_pktmbuf_data_len(mc);
+ epb->block_length = rte_pktmbuf_pkt_len(mc);
/* Interface index is filled in later during write */
mc->port = port_id;
@@ -593,7 +593,7 @@ rte_pcapng_copy(uint16_t port_id, uint32_t queue,
timestamp = rte_get_tsc_cycles();
epb->timestamp_hi = timestamp >> 32;
epb->timestamp_lo = (uint32_t)timestamp;
- epb->capture_length = data_len;
+ epb->capture_length = pkt_len;
epb->original_length = orig_len;
/* set trailer of block length */
@@ -623,7 +623,7 @@ rte_pcapng_write_packets(rte_pcapng_t *self,
/* sanity check that is really a pcapng mbuf */
epb = rte_pktmbuf_mtod(m, struct pcapng_enhance_packet_block *);
if (unlikely(epb->block_type != PCAPNG_ENHANCED_PACKET_BLOCK ||
- epb->block_length != rte_pktmbuf_data_len(m))) {
+ epb->block_length != rte_pktmbuf_pkt_len(m))) {
rte_errno = EINVAL;
return -1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.730371307 +0800
+++ 0031-pcapng-fix-handling-of-chained-mbufs.patch 2024-11-11 14:23:05.072192841 +0800
@@ -1 +1 @@
-From 6db358536fee7891b5cb670df94ec87543ddd0fb Mon Sep 17 00:00:00 2001
+From e989eae1c9a4ec9a7fdf8014a58cdd0a241a836f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6db358536fee7891b5cb670df94ec87543ddd0fb ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 62ef194168..aee7c91780 100644
+index 8b9e849d05..4022645615 100644
@@ -30 +32,2 @@
-@@ -1091,6 +1091,7 @@ Ognjen Joldzic <ognjen.joldzic@gmail.com>
+@@ -1059,6 +1059,7 @@ Odi Assli <odia@nvidia.com>
+ Ognjen Joldzic <ognjen.joldzic@gmail.com>
@@ -33 +35,0 @@
- Oleksandr Kolomeiets <okl-plv@napatech.com>
@@ -39 +41 @@
-index 2665b08c76..b219873c3a 100644
+index 89535efad0..5cdde0542a 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'app/dumpcap: fix handling of jumbo frames' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (30 preceding siblings ...)
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
` (88 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Tianli Lai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=09c32b20ec3718838ed7e0e089dc2e7104770e40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 09c32b20ec3718838ed7e0e089dc2e7104770e40 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 3 Oct 2024 15:09:03 -0700
Subject: [PATCH] app/dumpcap: fix handling of jumbo frames
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 ]
If dumpcap (in legacy pcap mode) tried to handle a large segmented
frame, it would core dump because rte_pktmbuf_read() would return NULL.
Fix by using the same logic as in the pcap PMD.
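A hedged sketch of the safer write path (dump_cb() is a hypothetical stand-in for pcap_dump()): size the bounce buffer for jumbo frames, truncate only when a segmented packet cannot fit it, and check the result of rte_pktmbuf_read() before using it.

#include <rte_ether.h>
#include <rte_mbuf.h>

static void dump_one(const struct rte_mbuf *m,
		void (*dump_cb)(const void *data, uint32_t caplen, uint32_t len))
{
	static uint8_t temp_data[RTE_ETHER_MAX_JUMBO_FRAME_LEN];
	uint32_t len = rte_pktmbuf_pkt_len(m);
	uint32_t caplen = len;
	const void *data;

	/* only truncate when the chain cannot fit into the bounce buffer */
	if (!rte_pktmbuf_is_contiguous(m) && len > sizeof(temp_data))
		caplen = sizeof(temp_data);

	data = rte_pktmbuf_read(m, 0, caplen, temp_data);
	if (data == NULL)
		return;   /* defensive: nothing sensible to write */

	dump_cb(data, caplen, len);
}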
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Reported-by: Tianli Lai <laitianli@tom.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/dumpcap/main.c | 15 ++++++++++-----
1 file changed, 10 insertions(+), 5 deletions(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 76c7475114..213e764c2e 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -874,7 +874,7 @@ static ssize_t
pcap_write_packets(pcap_dumper_t *dumper,
struct rte_mbuf *pkts[], uint16_t n)
{
- uint8_t temp_data[RTE_MBUF_DEFAULT_BUF_SIZE];
+ uint8_t temp_data[RTE_ETHER_MAX_JUMBO_FRAME_LEN];
struct pcap_pkthdr header;
uint16_t i;
size_t total = 0;
@@ -883,14 +883,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
for (i = 0; i < n; i++) {
struct rte_mbuf *m = pkts[i];
+ size_t len, caplen;
- header.len = rte_pktmbuf_pkt_len(m);
- header.caplen = RTE_MIN(header.len, sizeof(temp_data));
+ len = caplen = rte_pktmbuf_pkt_len(m);
+ if (unlikely(!rte_pktmbuf_is_contiguous(m) && len > sizeof(temp_data)))
+ caplen = sizeof(temp_data);
+
+ header.len = len;
+ header.caplen = caplen;
pcap_dump((u_char *)dumper, &header,
- rte_pktmbuf_read(m, 0, header.caplen, temp_data));
+ rte_pktmbuf_read(m, 0, caplen, temp_data));
- total += sizeof(header) + header.len;
+ total += sizeof(header) + caplen;
}
return total;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:06.767246206 +0800
+++ 0032-app-dumpcap-fix-handling-of-jumbo-frames.patch 2024-11-11 14:23:05.082192840 +0800
@@ -1 +1 @@
-From 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 Mon Sep 17 00:00:00 2001
+From 09c32b20ec3718838ed7e0e089dc2e7104770e40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5c0f970c0d0e2a963a7a970a71cad4f4244414a5 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 6feb8f5672..fcfaa19951 100644
+index 76c7475114..213e764c2e 100644
@@ -23 +25 @@
-@@ -902,7 +902,7 @@ static ssize_t
+@@ -874,7 +874,7 @@ static ssize_t
@@ -32 +34 @@
-@@ -911,14 +911,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
+@@ -883,14 +883,19 @@ pcap_write_packets(pcap_dumper_t *dumper,
^ permalink raw reply [flat|nested] 230+ messages in thread
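For reference, a minimal standalone sketch of the capture path as fixed above,
assuming the DPDK and libpcap development headers are available; it restates
the bounce-buffer logic from the diff rather than the exact dumpcap code.

/*
 * Cap the capture length only when the mbuf is segmented and larger than
 * the on-stack bounce buffer, so rte_pktmbuf_read() is never asked for
 * more bytes than temp_data can hold (and thus never returns NULL).
 */
#include <pcap/pcap.h>
#include <rte_ether.h>
#include <rte_mbuf.h>

static void
dump_one_packet(pcap_dumper_t *dumper, struct rte_mbuf *m)
{
	uint8_t temp_data[RTE_ETHER_MAX_JUMBO_FRAME_LEN];
	struct pcap_pkthdr header = { 0 };
	size_t len, caplen;
	const uint8_t *data;

	len = caplen = rte_pktmbuf_pkt_len(m);
	if (!rte_pktmbuf_is_contiguous(m) && len > sizeof(temp_data))
		caplen = sizeof(temp_data);	/* truncate instead of failing */

	header.len = len;
	header.caplen = caplen;

	/* Contiguous mbufs are returned in place; segmented ones are copied
	 * into temp_data, which is now always large enough for caplen. */
	data = rte_pktmbuf_read(m, 0, caplen, temp_data);
	if (data != NULL)
		pcap_dump((u_char *)dumper, &header, data);
}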
* patch 'ml/cnxk: fix handling of TVM model I/O' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (31 preceding siblings ...)
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
` (87 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Srikanth Yalavarthi; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e8a44520dc4a80da0f3d64f4fb9900820c783893
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e8a44520dc4a80da0f3d64f4fb9900820c783893 Mon Sep 17 00:00:00 2001
From: Srikanth Yalavarthi <syalavarthi@marvell.com>
Date: Tue, 30 Jul 2024 22:41:03 -0700
Subject: [PATCH] ml/cnxk: fix handling of TVM model I/O
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c4636d36bc2cc3a370200245da69006d6f5d9852 ]
Fixed incorrect handling of TVM models with a single MRVL
layer. Set the I/O layout to packed and fixed the calculation
of the quantized and dequantized data buffer addresses.
Fixes: 5cea2c67edfc ("ml/cnxk: update internal TVM model info structure")
Fixes: df2358f3adce ("ml/cnxk: add structures for TVM model type")
Signed-off-by: Srikanth Yalavarthi <syalavarthi@marvell.com>
---
drivers/ml/cnxk/cnxk_ml_ops.c | 12 ++++++++----
drivers/ml/cnxk/mvtvm_ml_model.c | 2 +-
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/drivers/ml/cnxk/cnxk_ml_ops.c b/drivers/ml/cnxk/cnxk_ml_ops.c
index 7bd73727e1..8863633155 100644
--- a/drivers/ml/cnxk/cnxk_ml_ops.c
+++ b/drivers/ml/cnxk/cnxk_ml_ops.c
@@ -1462,7 +1462,8 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf
d_offset = 0;
q_offset = 0;
for (i = 0; i < info->nb_inputs; i++) {
- if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+ if (model->type == ML_CNXK_MODEL_TYPE_TVM &&
+ model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) {
lcl_dbuffer = dbuffer[i]->addr;
lcl_qbuffer = qbuffer[i]->addr;
} else {
@@ -1474,7 +1475,8 @@ cnxk_ml_io_quantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_buf
if (ret < 0)
return ret;
- if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
+ if ((model->type == ML_CNXK_MODEL_TYPE_GLOW) ||
+ (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) {
d_offset += info->input[i].sz_d;
q_offset += info->input[i].sz_q;
}
@@ -1516,7 +1518,8 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b
q_offset = 0;
d_offset = 0;
for (i = 0; i < info->nb_outputs; i++) {
- if (model->type == ML_CNXK_MODEL_TYPE_TVM) {
+ if (model->type == ML_CNXK_MODEL_TYPE_TVM &&
+ model->subtype != ML_CNXK_MODEL_SUBTYPE_TVM_MRVL) {
lcl_qbuffer = qbuffer[i]->addr;
lcl_dbuffer = dbuffer[i]->addr;
} else {
@@ -1528,7 +1531,8 @@ cnxk_ml_io_dequantize(struct rte_ml_dev *dev, uint16_t model_id, struct rte_ml_b
if (ret < 0)
return ret;
- if (model->type == ML_CNXK_MODEL_TYPE_GLOW) {
+ if ((model->type == ML_CNXK_MODEL_TYPE_GLOW) ||
+ (model->subtype == ML_CNXK_MODEL_SUBTYPE_TVM_MRVL)) {
q_offset += info->output[i].sz_q;
d_offset += info->output[i].sz_d;
}
diff --git a/drivers/ml/cnxk/mvtvm_ml_model.c b/drivers/ml/cnxk/mvtvm_ml_model.c
index 0dbe08e988..bbda907714 100644
--- a/drivers/ml/cnxk/mvtvm_ml_model.c
+++ b/drivers/ml/cnxk/mvtvm_ml_model.c
@@ -352,7 +352,7 @@ tvm_mrvl_model:
metadata = &model->mvtvm.metadata;
strlcpy(info->name, metadata->model.name, TVMDP_NAME_STRLEN);
- info->io_layout = RTE_ML_IO_LAYOUT_SPLIT;
+ info->io_layout = RTE_ML_IO_LAYOUT_PACKED;
}
void
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.803313506 +0800
+++ 0033-ml-cnxk-fix-handling-of-TVM-model-I-O.patch 2024-11-11 14:23:05.082192840 +0800
@@ -1 +1 @@
-From c4636d36bc2cc3a370200245da69006d6f5d9852 Mon Sep 17 00:00:00 2001
+From e8a44520dc4a80da0f3d64f4fb9900820c783893 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c4636d36bc2cc3a370200245da69006d6f5d9852 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -65 +67 @@
-index 3ada6f42db..3c5ab0d2e1 100644
+index 0dbe08e988..bbda907714 100644
@@ -68 +70 @@
-@@ -356,7 +356,7 @@ tvm_mrvl_model:
+@@ -352,7 +352,7 @@ tvm_mrvl_model:
^ permalink raw reply [flat|nested] 230+ messages in thread
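A minimal sketch of the buffer-address selection restored above, using
hypothetical stand-in types (io_tensor, pick_buffers) instead of the driver
structures: packed-layout models (GLOW, or TVM with a single MRVL layer) walk
one buffer with running dequantized/quantized offsets, while split-layout TVM
models take a separate buffer address per tensor.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct io_tensor {		/* hypothetical per-tensor sizes */
	size_t sz_d;		/* dequantized size */
	size_t sz_q;		/* quantized size   */
};

static void
pick_buffers(bool packed_layout, const struct io_tensor *t, uint32_t nb,
	     uint8_t *d_base, uint8_t *q_base,
	     uint8_t **d_addrs, uint8_t **q_addrs)
{
	size_t d_off = 0, q_off = 0;
	uint32_t i;

	for (i = 0; i < nb; i++) {
		uint8_t *d, *q;

		if (packed_layout) {	/* one buffer, advance offsets */
			d = d_base + d_off;
			q = q_base + q_off;
			d_off += t[i].sz_d;
			q_off += t[i].sz_q;
		} else {		/* split: one buffer per tensor */
			d = d_addrs[i];
			q = q_addrs[i];
		}
		/* quantize/dequantize tensor i between d and q here */
		(void)d;
		(void)q;
	}
}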
* patch 'net/cnxk: fix Rx timestamp handling for VF' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (32 preceding siblings ...)
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
` (86 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=54799745107bd25078888a57d1e185e57763075b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 54799745107bd25078888a57d1e185e57763075b Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:40 +0530
Subject: [PATCH] net/cnxk: fix Rx timestamp handling for VF
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 ]
When timestamp is enabled on the PF in the kernel and the respective
VF is attached to a DPDK application, mbuf_addr gets corrupted in
cnxk_nix_timestamp_dynfield() because "tstamp_dynfield_offset" is zero
for a PTP-enabled PF.
This patch fixes that.
Fixes: 76dff63874e3 ("net/cnxk: support base PTP timesync")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 12 +++++++++++-
drivers/net/cnxk/cn9k_ethdev.c | 12 +++++++++++-
drivers/net/cnxk/cnxk_ethdev.c | 2 +-
3 files changed, 23 insertions(+), 3 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 29b7f2ba5e..24c4c2d15e 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -473,7 +473,7 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
struct rte_eth_dev *eth_dev;
struct cn10k_eth_rxq *rxq;
- int i;
+ int i, rc;
if (!dev)
return -EINVAL;
@@ -496,7 +496,17 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
* and MTU setting also requires MBOX message to be
* sent(VF->PF)
*/
+ if (dev->ptp_en) {
+ rc = rte_mbuf_dyn_rx_timestamp_register
+ (&dev->tstamp.tstamp_dynfield_offset,
+ &dev->tstamp.rx_tstamp_dynflag);
+ if (rc != 0) {
+ plt_err("Failed to register Rx timestamp field/flag");
+ return -EINVAL;
+ }
+ }
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+ rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
}
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index b92b978a27..c06764d745 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -432,7 +432,7 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
struct cnxk_eth_dev *dev = (struct cnxk_eth_dev *)nix;
struct rte_eth_dev *eth_dev;
struct cn9k_eth_rxq *rxq;
- int i;
+ int i, rc;
if (!dev)
return -EINVAL;
@@ -455,7 +455,17 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
* and MTU setting also requires MBOX message to be
* sent(VF->PF)
*/
+ if (dev->ptp_en) {
+ rc = rte_mbuf_dyn_rx_timestamp_register
+ (&dev->tstamp.tstamp_dynfield_offset,
+ &dev->tstamp.rx_tstamp_dynflag);
+ if (rc != 0) {
+ plt_err("Failed to register Rx timestamp field/flag");
+ return -EINVAL;
+ }
+ }
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
+ rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
}
diff --git a/drivers/net/cnxk/cnxk_ethdev.c b/drivers/net/cnxk/cnxk_ethdev.c
index 60baf806ab..f0cf376e7d 100644
--- a/drivers/net/cnxk/cnxk_ethdev.c
+++ b/drivers/net/cnxk/cnxk_ethdev.c
@@ -1734,7 +1734,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
else
cnxk_eth_dev_ops.timesync_disable(eth_dev);
- if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+ if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP || dev->ptp_en) {
rc = rte_mbuf_dyn_rx_timestamp_register
(&dev->tstamp.tstamp_dynfield_offset,
&dev->tstamp.rx_tstamp_dynflag);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.847757505 +0800
+++ 0034-net-cnxk-fix-Rx-timestamp-handling-for-VF.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 Mon Sep 17 00:00:00 2001
+From 54799745107bd25078888a57d1e185e57763075b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0efd93a2740d1ab13fc55656ce9e55f79e09c4f3 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index ad6bc1ec21..46476e386a 100644
+index 29b7f2ba5e..24c4c2d15e 100644
@@ -54 +56 @@
-index 84c88655f8..5417628368 100644
+index b92b978a27..c06764d745 100644
@@ -85 +87 @@
-index 33bac55704..74b266ad58 100644
+index 60baf806ab..f0cf376e7d 100644
@@ -88 +90 @@
-@@ -1751,7 +1751,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
+@@ -1734,7 +1734,7 @@ cnxk_nix_dev_start(struct rte_eth_dev *eth_dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
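The dynamic field registered above is public mbuf API; a short usage sketch,
independent of the driver, that registers the Rx timestamp field once and
reads it back from a received mbuf (the helper names here are illustrative).

#include <rte_mbuf.h>
#include <rte_mbuf_dyn.h>

static int timestamp_dynfield_offset = -1;
static uint64_t timestamp_rx_dynflag;

static int
rx_timestamp_setup(void)
{
	/* Safe to call more than once: an already registered field keeps
	 * its offset. */
	return rte_mbuf_dyn_rx_timestamp_register(&timestamp_dynfield_offset,
						  &timestamp_rx_dynflag);
}

static inline rte_mbuf_timestamp_t
rx_timestamp_get(struct rte_mbuf *m)
{
	if ((m->ol_flags & timestamp_rx_dynflag) == 0)
		return 0;	/* no timestamp attached to this mbuf */
	return *RTE_MBUF_DYNFIELD(m, timestamp_dynfield_offset,
				  rte_mbuf_timestamp_t *);
}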
* patch 'net/cnxk: fix Rx offloads to handle timestamp' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (33 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
` (85 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=118f84d541146b7a073ca53849069f4be15b7036
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 118f84d541146b7a073ca53849069f4be15b7036 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:41 +0530
Subject: [PATCH] net/cnxk: fix Rx offloads to handle timestamp
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f12dab814f0898c661d32f6cdaaae6a11bbacb6e ]
Rx offload flags are updated to handle the timestamp in the VF
when PTP is enabled on the respective PF in the kernel.
Fixes: c7c7c8ed7d47 ("net/cnxk: get PTP status")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev.c | 6 +++++-
drivers/net/cnxk/cn9k_ethdev.c | 5 ++++-
drivers/net/cnxk/cnxk_ethdev.h | 7 +++++++
3 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/net/cnxk/cn10k_ethdev.c b/drivers/net/cnxk/cn10k_ethdev.c
index 24c4c2d15e..3b7de891e0 100644
--- a/drivers/net/cnxk/cn10k_ethdev.c
+++ b/drivers/net/cnxk/cn10k_ethdev.c
@@ -30,7 +30,7 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -508,6 +508,10 @@ cn10k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
+ if (dev->cnxk_sso_ptp_tstamp_cb)
+ dev->cnxk_sso_ptp_tstamp_cb(eth_dev->data->port_id,
+ NIX_RX_OFFLOAD_TSTAMP_F, dev->ptp_en);
+
}
return 0;
diff --git a/drivers/net/cnxk/cn9k_ethdev.c b/drivers/net/cnxk/cn9k_ethdev.c
index c06764d745..dee0abdac5 100644
--- a/drivers/net/cnxk/cn9k_ethdev.c
+++ b/drivers/net/cnxk/cn9k_ethdev.c
@@ -30,7 +30,7 @@ nix_rx_offload_flags(struct rte_eth_dev *eth_dev)
if (dev->rx_offloads & RTE_ETH_RX_OFFLOAD_SCATTER)
flags |= NIX_RX_MULTI_SEG_F;
- if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP))
+ if ((dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) || dev->ptp_en)
flags |= NIX_RX_OFFLOAD_TSTAMP_F;
if (!dev->ptype_disable)
@@ -467,6 +467,9 @@ cn9k_nix_ptp_info_update_cb(struct roc_nix *nix, bool ptp_en)
eth_dev->rx_pkt_burst = nix_ptp_vf_burst;
rte_eth_fp_ops[eth_dev->data->port_id].rx_pkt_burst = eth_dev->rx_pkt_burst;
rte_mb();
+ if (dev->cnxk_sso_ptp_tstamp_cb)
+ dev->cnxk_sso_ptp_tstamp_cb(eth_dev->data->port_id,
+ NIX_RX_OFFLOAD_TSTAMP_F, dev->ptp_en);
}
return 0;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index 4d3ebf123b..edbb492e2c 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -424,6 +424,13 @@ struct cnxk_eth_dev {
/* MCS device */
struct cnxk_mcs_dev *mcs_dev;
struct cnxk_macsec_sess_list mcs_list;
+
+ /* SSO event dev */
+ void *evdev_priv;
+
+ /* SSO event dev ptp */
+ void (*cnxk_sso_ptp_tstamp_cb)
+ (uint16_t port_id, uint16_t flags, bool ptp_en);
};
struct cnxk_eth_rxq_sp {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.885498904 +0800
+++ 0035-net-cnxk-fix-Rx-offloads-to-handle-timestamp.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From f12dab814f0898c661d32f6cdaaae6a11bbacb6e Mon Sep 17 00:00:00 2001
+From 118f84d541146b7a073ca53849069f4be15b7036 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f12dab814f0898c661d32f6cdaaae6a11bbacb6e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 46476e386a..f5b485650e 100644
+index 24c4c2d15e..3b7de891e0 100644
@@ -44 +46 @@
-index 5417628368..c419593a23 100644
+index c06764d745..dee0abdac5 100644
@@ -67 +69 @@
-index 687c60c27d..5920488e1a 100644
+index 4d3ebf123b..edbb492e2c 100644
@@ -70,4 +72,4 @@
-@@ -433,6 +433,13 @@ struct cnxk_eth_dev {
-
- /* Eswitch domain ID */
- uint16_t switch_domain_id;
+@@ -424,6 +424,13 @@ struct cnxk_eth_dev {
+ /* MCS device */
+ struct cnxk_mcs_dev *mcs_dev;
+ struct cnxk_macsec_sess_list mcs_list;
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'event/cnxk: fix Rx timestamp handling' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (34 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
` (84 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7b06dc0a409f787ba2f45c98d97340effa6a6ff1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7b06dc0a409f787ba2f45c98d97340effa6a6ff1 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:42 +0530
Subject: [PATCH] event/cnxk: fix Rx timestamp handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 697883bcb0a84f06b52064ecbf60c619edbf9083 ]
Handle the timestamp correctly for a VF when PTP is enabled before
running the application in event mode, by updating the Rx offload
flags in the link-up notification.
Fixes: f1cdb3c5b616 ("net/cnxk: enable PTP for event Rx adapter")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 32 ++++++++++++++++++++++++
drivers/event/cnxk/cn9k_eventdev.c | 31 +++++++++++++++++++++++
drivers/event/cnxk/cnxk_eventdev_adptr.c | 2 +-
3 files changed, 64 insertions(+), 1 deletion(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index bb0c910553..9f1d01f048 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -782,12 +782,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
}
+static void
+eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+{
+ struct rte_event_fp_ops *fp_op =
+ rte_event_fp_ops + event_dev->data->dev_id;
+
+ fp_op->dequeue = event_dev->dequeue;
+ fp_op->dequeue_burst = event_dev->dequeue_burst;
+}
+
+static void
+cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ if (ptp_en)
+ evdev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
+ else
+ evdev->tstamp[port_id] = NULL;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_tstamp_update(event_dev);
+}
+
static int
cn10k_sso_rx_adapter_queue_add(
const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id,
const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
struct roc_sso_hwgrp_stash stash;
struct cn10k_eth_rxq *rxq;
@@ -802,6 +830,10 @@ cn10k_sso_rx_adapter_queue_add(
queue_conf);
if (rc)
return -EINVAL;
+
+ cnxk_eth_dev->cnxk_sso_ptp_tstamp_cb = cn10k_sso_tstamp_hdl_update;
+ cnxk_eth_dev->evdev_priv = (struct rte_eventdev *)(uintptr_t)event_dev;
+
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
cn10k_sso_set_priv_mem(event_dev, lookup_mem);
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 9fb9ca0d63..ec3022b38c 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -834,12 +834,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
}
+static void
+eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+{
+ struct rte_event_fp_ops *fp_op =
+ rte_event_fp_ops + event_dev->data->dev_id;
+
+ fp_op->dequeue = event_dev->dequeue;
+ fp_op->dequeue_burst = event_dev->dequeue_burst;
+}
+
+static void
+cn9k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ if (ptp_en)
+ evdev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
+ else
+ evdev->tstamp[port_id] = NULL;
+ cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_tstamp_update(event_dev);
+}
+
static int
cn9k_sso_rx_adapter_queue_add(
const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev,
int32_t rx_queue_id,
const struct rte_event_eth_rx_adapter_queue_conf *queue_conf)
{
+ struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
struct cn9k_eth_rxq *rxq;
void *lookup_mem;
int rc;
@@ -853,6 +881,9 @@ cn9k_sso_rx_adapter_queue_add(
if (rc)
return -EINVAL;
+ cnxk_eth_dev->cnxk_sso_ptp_tstamp_cb = cn9k_sso_tstamp_hdl_update;
+ cnxk_eth_dev->evdev_priv = (struct rte_eventdev *)(uintptr_t)event_dev;
+
rxq = eth_dev->data->rx_queues[0];
lookup_mem = rxq->lookup_mem;
cn9k_sso_set_priv_mem(event_dev, lookup_mem);
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 92aea92389..fe905b5461 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -212,7 +212,7 @@ static void
cnxk_sso_tstamp_cfg(uint16_t port_id, struct cnxk_eth_dev *cnxk_eth_dev,
struct cnxk_sso_evdev *dev)
{
- if (cnxk_eth_dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ if (cnxk_eth_dev->rx_offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP || cnxk_eth_dev->ptp_en)
dev->tstamp[port_id] = &cnxk_eth_dev->tstamp;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.922928903 +0800
+++ 0036-event-cnxk-fix-Rx-timestamp-handling.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 697883bcb0a84f06b52064ecbf60c619edbf9083 Mon Sep 17 00:00:00 2001
+From 7b06dc0a409f787ba2f45c98d97340effa6a6ff1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 697883bcb0a84f06b52064ecbf60c619edbf9083 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 5bd779990e..c8767a1b2b 100644
+index bb0c910553..9f1d01f048 100644
@@ -24 +26 @@
-@@ -842,12 +842,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -782,12 +782,40 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -65 +67 @@
-@@ -862,6 +890,10 @@ cn10k_sso_rx_adapter_queue_add(
+@@ -802,6 +830,10 @@ cn10k_sso_rx_adapter_queue_add(
@@ -77 +79 @@
-index 28350d1275..377e910837 100644
+index 9fb9ca0d63..ec3022b38c 100644
@@ -80 +82 @@
-@@ -911,12 +911,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -834,12 +834,40 @@ cn9k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -121 +123 @@
-@@ -930,6 +958,9 @@ cn9k_sso_rx_adapter_queue_add(
+@@ -853,6 +881,9 @@ cn9k_sso_rx_adapter_queue_add(
@@ -132 +134 @@
-index 2c049e7041..3cac42111a 100644
+index 92aea92389..fe905b5461 100644
@@ -135 +137 @@
-@@ -213,7 +213,7 @@ static void
+@@ -212,7 +212,7 @@ static void
^ permalink raw reply [flat|nested] 230+ messages in thread
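A simplified, self-contained sketch of the notification round trip added by
the two cnxk patches above, with hypothetical types and stub fast-path
functions: the event driver registers a callback plus an owner pointer on the
ethdev, and when PTP toggles the callback folds the new flag into its Rx
offloads, re-selects the dequeue function and republishes it so running
workers pick up the timestamp-aware path.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define RX_OFFLOAD_TSTAMP_F (1u << 0)		/* hypothetical flag value */

typedef void (*ptp_tstamp_cb_t)(uint16_t port_id, uint16_t flags, bool ptp_en);

struct eth_dev_priv {				/* stand-in for cnxk_eth_dev */
	void *evdev_priv;			/* owner set by event driver */
	ptp_tstamp_cb_t ptp_tstamp_cb;
};

struct ev_dev_priv {				/* stand-in for cnxk_sso_evdev */
	uint64_t rx_offloads;
	void (*dequeue_burst)(void);		/* simplified fast-path slot */
};

static void dequeue_burst_plain(void)  { /* default Rx path */ }
static void dequeue_burst_tstamp(void) { /* timestamp-aware Rx path */ }

/* Published fast-path slot, playing the role of rte_event_fp_ops here. */
static void (*fp_dequeue_burst)(void) = dequeue_burst_plain;

static struct ev_dev_priv evdev;
static struct eth_dev_priv ethdev;

static void
sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
{
	(void)port_id;
	(void)ptp_en;

	evdev.rx_offloads |= flags;
	/* Re-select the dequeue function for the new offload set ... */
	evdev.dequeue_burst = (evdev.rx_offloads & RX_OFFLOAD_TSTAMP_F) ?
			dequeue_burst_tstamp : dequeue_burst_plain;
	/* ... and republish it where the data path actually reads it. */
	fp_dequeue_burst = evdev.dequeue_burst;
}

static void
event_rx_adapter_queue_add(void)		/* event-driver side */
{
	ethdev.evdev_priv = &evdev;
	ethdev.ptp_tstamp_cb = sso_tstamp_hdl_update;
}

static void
eth_ptp_state_changed(uint16_t port_id, bool ptp_en)	/* ethdev side */
{
	if (ethdev.ptp_tstamp_cb != NULL)
		ethdev.ptp_tstamp_cb(port_id, RX_OFFLOAD_TSTAMP_F, ptp_en);
}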
* patch 'common/cnxk: fix MAC address change with active VF' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (35 preceding siblings ...)
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
` (83 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1ba168d355ced3b1fb8513939d0dce8df4a5c8f1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1ba168d355ced3b1fb8513939d0dce8df4a5c8f1 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Tue, 1 Oct 2024 11:30:45 +0530
Subject: [PATCH] common/cnxk: fix MAC address change with active VF
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 ]
If the device is in a reconfigure state, it returns an error when
changing the default MAC or adding a new MAC to the LMAC filter
table while there are active VFs on a PF.
Allow MAC address set/add even when active VFs are present on
the PF.
Fixes: 313cc41830ec ("common/cnxk: support NIX MAC operations")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/roc_nix_mac.c | 10 ----------
1 file changed, 10 deletions(-)
diff --git a/drivers/common/cnxk/roc_nix_mac.c b/drivers/common/cnxk/roc_nix_mac.c
index 2d1c29dd66..ce3fb034c5 100644
--- a/drivers/common/cnxk/roc_nix_mac.c
+++ b/drivers/common/cnxk/roc_nix_mac.c
@@ -91,11 +91,6 @@ roc_nix_mac_addr_set(struct roc_nix *roc_nix, const uint8_t addr[])
goto exit;
}
- if (dev_active_vfs(&nix->dev)) {
- rc = NIX_ERR_OP_NOTSUP;
- goto exit;
- }
-
req = mbox_alloc_msg_cgx_mac_addr_set(mbox);
if (req == NULL)
goto exit;
@@ -152,11 +147,6 @@ roc_nix_mac_addr_add(struct roc_nix *roc_nix, uint8_t addr[])
goto exit;
}
- if (dev_active_vfs(&nix->dev)) {
- rc = NIX_ERR_OP_NOTSUP;
- goto exit;
- }
-
req = mbox_alloc_msg_cgx_mac_addr_add(mbox);
mbox_memcpy(req->mac_addr, addr, PLT_ETHER_ADDR_LEN);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.954124602 +0800
+++ 0037-common-cnxk-fix-MAC-address-change-with-active-VF.patch 2024-11-11 14:23:05.092192840 +0800
@@ -1 +1 @@
-From 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 Mon Sep 17 00:00:00 2001
+From 1ba168d355ced3b1fb8513939d0dce8df4a5c8f1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2d4505dc6d4b541710f1c178ee0b309fab4d2ee8 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 0ffd05e4d4..54db1adf17 100644
+index 2d1c29dd66..ce3fb034c5 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
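From the application side the change above only removes a restriction; a
short usage sketch with the public ethdev API, assuming port_id refers to a
cnxk PF, now behaves the same whether or not VFs are attached (error handling
trimmed for brevity).

#include <rte_ethdev.h>
#include <rte_ether.h>

static int
update_port_macs(uint16_t port_id)
{
	struct rte_ether_addr new_default = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 } };
	struct rte_ether_addr extra = {
		.addr_bytes = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x02 } };
	int ret;

	/* Change the default (primary) MAC address of the port. */
	ret = rte_eth_dev_default_mac_addr_set(port_id, &new_default);
	if (ret != 0)
		return ret;

	/* Add a second unicast address to the filter table. */
	return rte_eth_dev_mac_addr_add(port_id, &extra, 0);
}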
* patch 'common/cnxk: fix inline CTX write' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (36 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
` (82 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Tue, 1 Oct 2024 11:30:47 +0530
Subject: [PATCH] common/cnxk: fix inline CTX write
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 6c3de40af8362d2d7eede3b4fd12075fce964f4d ]
Reading a CPT_LF_CTX_ERR CSR will ensure writes for
FLUSH are complete and also tell whether the flush is
complete or not.
Fixes: 71213a8b773c ("common/cnxk: support CPT CTX write through microcode op")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_nix_inl.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/common/cnxk/roc_nix_inl.c b/drivers/common/cnxk/roc_nix_inl.c
index bc9cc2f429..ba51ddd8c8 100644
--- a/drivers/common/cnxk/roc_nix_inl.c
+++ b/drivers/common/cnxk/roc_nix_inl.c
@@ -1669,6 +1669,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
struct nix_inl_dev *inl_dev = NULL;
struct roc_cpt_lf *outb_lf = NULL;
union cpt_lf_ctx_flush flush;
+ union cpt_lf_ctx_err err;
bool get_inl_lf = true;
uintptr_t rbase;
struct nix *nix;
@@ -1710,6 +1711,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
flush.s.cptr = ((uintptr_t)sa_cptr) >> 7;
plt_write64(flush.u, rbase + CPT_LF_CTX_FLUSH);
+ plt_atomic_thread_fence(__ATOMIC_ACQ_REL);
+
+ /* Read a CSR to ensure that the FLUSH operation is complete */
+ err.u = plt_read64(rbase + CPT_LF_CTX_ERR);
+
+ if (err.s.flush_st_flt)
+ plt_warn("CTX flush could not complete");
return 0;
}
plt_nix_dbg("Could not get CPT LF for CTX write");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:06.989298502 +0800
+++ 0038-common-cnxk-fix-inline-CTX-write.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 6c3de40af8362d2d7eede3b4fd12075fce964f4d Mon Sep 17 00:00:00 2001
+From 06df4e3ae1eaaabc9d325cd40ab792a75d2ae3ba Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 6c3de40af8362d2d7eede3b4fd12075fce964f4d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index a984ac56d9..d0328921a7 100644
+index bc9cc2f429..ba51ddd8c8 100644
@@ -22 +24 @@
-@@ -1748,6 +1748,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
+@@ -1669,6 +1669,7 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
@@ -30 +32 @@
-@@ -1789,6 +1790,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
+@@ -1710,6 +1711,13 @@ roc_nix_inl_ctx_write(struct roc_nix *roc_nix, void *sa_dptr, void *sa_cptr,
^ permalink raw reply [flat|nested] 230+ messages in thread
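A generic, self-contained restatement of the pattern applied above, with
hypothetical register offsets and bit layout: post the flush write, fence,
then read a CSR on the same LF so the write is known to have completed and
the fault bit can be inspected.

#include <stdint.h>
#include <stdio.h>

#define CTX_FLUSH_OFF 0x510	/* hypothetical CSR offsets */
#define CTX_ERR_OFF   0x520
#define FLUSH_ST_FLT  0x1	/* hypothetical fault bit */

static inline void
mmio_write64(volatile void *addr, uint64_t v)
{
	*(volatile uint64_t *)addr = v;
}

static inline uint64_t
mmio_read64(volatile void *addr)
{
	return *(volatile uint64_t *)addr;
}

static int
ctx_flush_sync(volatile uint8_t *rbase, uint64_t cptr)
{
	uint64_t err;

	mmio_write64(rbase + CTX_FLUSH_OFF, cptr >> 7);
	__atomic_thread_fence(__ATOMIC_ACQ_REL);

	/* Reading a CSR on the same LF guarantees the flush write has
	 * completed before the fault bit is checked. */
	err = mmio_read64(rbase + CTX_ERR_OFF);
	if (err & FLUSH_ST_FLT) {
		fprintf(stderr, "CTX flush could not complete\n");
		return -1;
	}
	return 0;
}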
* patch 'common/cnxk: fix CPT HW word size for outbound SA' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (37 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
` (81 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Nithin Dabilpuram; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f7e17fe99ebca664289173ce71eaac65979caf85
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f7e17fe99ebca664289173ce71eaac65979caf85 Mon Sep 17 00:00:00 2001
From: Nithin Dabilpuram <ndabilpuram@marvell.com>
Date: Tue, 1 Oct 2024 11:30:48 +0530
Subject: [PATCH] common/cnxk: fix CPT HW word size for outbound SA
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9587a324f28e84937c9efef534da542c30ff122b ]
Fix the CPT HW word size initialized for the outbound SA to be
two words.
Fixes: 5ece02e736c3 ("common/cnxk: use common SA init API for default options")
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
---
drivers/common/cnxk/roc_ie_ot.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/common/cnxk/roc_ie_ot.c b/drivers/common/cnxk/roc_ie_ot.c
index d0b7ad38f1..356bb8c5a5 100644
--- a/drivers/common/cnxk/roc_ie_ot.c
+++ b/drivers/common/cnxk/roc_ie_ot.c
@@ -38,5 +38,6 @@ roc_ot_ipsec_outb_sa_init(struct roc_ot_ipsec_outb_sa *sa)
offset = offsetof(struct roc_ot_ipsec_outb_sa, ctx);
sa->w0.s.ctx_push_size = (offset / ROC_CTX_UNIT_8B) + 1;
sa->w0.s.ctx_size = ROC_IE_OT_CTX_ILEN;
+ sa->w0.s.ctx_hdr_size = ROC_IE_OT_SA_CTX_HDR_SIZE;
sa->w0.s.aop_valid = 1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.021158301 +0800
+++ 0039-common-cnxk-fix-CPT-HW-word-size-for-outbound-SA.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 9587a324f28e84937c9efef534da542c30ff122b Mon Sep 17 00:00:00 2001
+From f7e17fe99ebca664289173ce71eaac65979caf85 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9587a324f28e84937c9efef534da542c30ff122b ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 465b2bc1fb..1b436dba72 100644
+index d0b7ad38f1..356bb8c5a5 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/cnxk: fix OOP handling for inbound packets' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (38 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
` (80 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=92230f6939d975f98fceaa01cf21af926741e87d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 92230f6939d975f98fceaa01cf21af926741e87d Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:54 +0530
Subject: [PATCH] net/cnxk: fix OOP handling for inbound packets
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d524a5526efa6b4cc01d13d8d50785c08d9b6891 ]
To handle OOP for inbound packets, processing is done based on
the NIX_RX_REAS_F flag. However, on SKUs that do not support
reassembly, the inbound out-of-place processing test case fails
because the reassembly flag is not updated in event mode.
This patch fixes that.
Fixes: 5e9e008d0127 ("net/cnxk: support inline ingress out-of-place session")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/net/cnxk/cn10k_ethdev_sec.c | 10 ++++++++++
drivers/net/cnxk/cnxk_ethdev.h | 4 ++++
drivers/net/cnxk/version.map | 1 +
3 files changed, 15 insertions(+)
diff --git a/drivers/net/cnxk/cn10k_ethdev_sec.c b/drivers/net/cnxk/cn10k_ethdev_sec.c
index 4719f6b863..47822a3d84 100644
--- a/drivers/net/cnxk/cn10k_ethdev_sec.c
+++ b/drivers/net/cnxk/cn10k_ethdev_sec.c
@@ -14,6 +14,13 @@
#include <cnxk_security.h>
#include <roc_priv.h>
+cnxk_ethdev_rx_offload_cb_t cnxk_ethdev_rx_offload_cb;
+void
+cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb)
+{
+ cnxk_ethdev_rx_offload_cb = cb;
+}
+
static struct rte_cryptodev_capabilities cn10k_eth_sec_crypto_caps[] = {
{ /* AES GCM */
.op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
@@ -891,6 +898,9 @@ cn10k_eth_sec_session_create(void *device,
!(dev->rx_offload_flags & NIX_RX_REAS_F)) {
dev->rx_offload_flags |= NIX_RX_REAS_F;
cn10k_eth_set_rx_function(eth_dev);
+ if (cnxk_ethdev_rx_offload_cb)
+ cnxk_ethdev_rx_offload_cb(eth_dev->data->port_id,
+ NIX_RX_REAS_F);
}
} else {
struct roc_ot_ipsec_outb_sa *outb_sa, *outb_sa_dptr;
diff --git a/drivers/net/cnxk/cnxk_ethdev.h b/drivers/net/cnxk/cnxk_ethdev.h
index edbb492e2c..138d206987 100644
--- a/drivers/net/cnxk/cnxk_ethdev.h
+++ b/drivers/net/cnxk/cnxk_ethdev.h
@@ -640,6 +640,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
int cnxk_nix_lookup_mem_metapool_clear(struct cnxk_eth_dev *dev);
__rte_internal
int cnxk_nix_inb_mode_set(struct cnxk_eth_dev *dev, bool use_inl_dev);
+typedef void (*cnxk_ethdev_rx_offload_cb_t)(uint16_t port_id, uint64_t flags);
+__rte_internal
+void cnxk_ethdev_rx_offload_cb_register(cnxk_ethdev_rx_offload_cb_t cb);
+
struct cnxk_eth_sec_sess *cnxk_eth_sec_sess_get_by_spi(struct cnxk_eth_dev *dev,
uint32_t spi, bool inb);
struct cnxk_eth_sec_sess *
diff --git a/drivers/net/cnxk/version.map b/drivers/net/cnxk/version.map
index 77f574bb16..078456a9ed 100644
--- a/drivers/net/cnxk/version.map
+++ b/drivers/net/cnxk/version.map
@@ -16,4 +16,5 @@ EXPERIMENTAL {
INTERNAL {
global:
cnxk_nix_inb_mode_set;
+ cnxk_ethdev_rx_offload_cb_register;
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.052138200 +0800
+++ 0040-net-cnxk-fix-OOP-handling-for-inbound-packets.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From d524a5526efa6b4cc01d13d8d50785c08d9b6891 Mon Sep 17 00:00:00 2001
+From 92230f6939d975f98fceaa01cf21af926741e87d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d524a5526efa6b4cc01d13d8d50785c08d9b6891 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index c9cb540e85..6acab8afa0 100644
+index 4719f6b863..47822a3d84 100644
@@ -26,3 +28,3 @@
-@@ -28,6 +28,13 @@ PLT_STATIC_ASSERT(RTE_PMD_CNXK_AR_WIN_SIZE_MAX == ROC_AR_WIN_SIZE_MAX);
- PLT_STATIC_ASSERT(RTE_PMD_CNXK_LOG_MIN_AR_WIN_SIZE_M1 == ROC_LOG_MIN_AR_WIN_SIZE_M1);
- PLT_STATIC_ASSERT(RTE_PMD_CNXK_AR_WINBITS_SZ == ROC_AR_WINBITS_SZ);
+@@ -14,6 +14,13 @@
+ #include <cnxk_security.h>
+ #include <roc_priv.h>
@@ -40 +42 @@
-@@ -908,6 +915,9 @@ cn10k_eth_sec_session_create(void *device,
+@@ -891,6 +898,9 @@ cn10k_eth_sec_session_create(void *device,
@@ -51 +53 @@
-index d4440b25ac..350adc1161 100644
+index edbb492e2c..138d206987 100644
@@ -54 +56 @@
-@@ -725,6 +725,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
+@@ -640,6 +640,10 @@ int cnxk_nix_lookup_mem_metapool_set(struct cnxk_eth_dev *dev);
@@ -66 +68 @@
-index 099c518ecf..edb0a1c059 100644
+index 77f574bb16..078456a9ed 100644
@@ -69 +71 @@
-@@ -23,4 +23,5 @@ EXPERIMENTAL {
+@@ -16,4 +16,5 @@ EXPERIMENTAL {
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'event/cnxk: fix OOP handling in event mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (39 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
` (79 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=80a61a0a00e593bc6242a9e8f0a210936885c9b3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 80a61a0a00e593bc6242a9e8f0a210936885c9b3 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 11:30:55 +0530
Subject: [PATCH] event/cnxk: fix OOP handling in event mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 01a990fe40e827c5f3497f785ce7fd68bff8ef5c ]
Update the event device with NIX_RX_REAS_F to handle
out-of-place processing on SKUs that do not support
reassembly, as the cn10k driver processes OOP with
NIX_RX_REAS_F enabled.
Fixes: 5e9e008d0127 ("net/cnxk: support inline ingress out-of-place session")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/event/cnxk/cn10k_eventdev.c | 18 ++++++++++++++++--
1 file changed, 16 insertions(+), 2 deletions(-)
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 9f1d01f048..a44a33eae8 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -783,7 +783,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
}
static void
-eventdev_fops_tstamp_update(struct rte_eventdev *event_dev)
+eventdev_fops_update(struct rte_eventdev *event_dev)
{
struct rte_event_fp_ops *fp_op =
rte_event_fp_ops + event_dev->data->dev_id;
@@ -806,7 +806,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
else
evdev->tstamp[port_id] = NULL;
cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
- eventdev_fops_tstamp_update(event_dev);
+ eventdev_fops_update(event_dev);
+}
+
+static void
+cn10k_sso_rx_offload_cb(uint16_t port_id, uint64_t flags)
+{
+ struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+ struct cnxk_eth_dev *cnxk_eth_dev = dev->data->dev_private;
+ struct rte_eventdev *event_dev = cnxk_eth_dev->evdev_priv;
+ struct cnxk_sso_evdev *evdev = cnxk_sso_pmd_priv(event_dev);
+
+ evdev->rx_offloads |= flags;
+ cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev);
+ eventdev_fops_update(event_dev);
}
static int
@@ -1116,6 +1129,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
return rc;
}
+ cnxk_ethdev_rx_offload_cb_register(cn10k_sso_rx_offload_cb);
event_dev->dev_ops = &cn10k_sso_dev_ops;
/* For secondary processes, the primary has done all the work */
if (rte_eal_process_type() != RTE_PROC_PRIMARY) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.082527300 +0800
+++ 0041-event-cnxk-fix-OOP-handling-in-event-mode.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 01a990fe40e827c5f3497f785ce7fd68bff8ef5c Mon Sep 17 00:00:00 2001
+From 80a61a0a00e593bc6242a9e8f0a210936885c9b3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 01a990fe40e827c5f3497f785ce7fd68bff8ef5c ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index c8767a1b2b..531c489172 100644
+index 9f1d01f048..a44a33eae8 100644
@@ -23 +25 @@
-@@ -843,7 +843,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
+@@ -783,7 +783,7 @@ cn10k_sso_set_priv_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
@@ -32 +34 @@
-@@ -866,7 +866,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
+@@ -806,7 +806,20 @@ cn10k_sso_tstamp_hdl_update(uint16_t port_id, uint16_t flags, bool ptp_en)
@@ -54 +56 @@
-@@ -1241,6 +1254,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
+@@ -1116,6 +1129,7 @@ cn10k_sso_init(struct rte_eventdev *event_dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/cnxk: fix base log level' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (40 preceding siblings ...)
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
` (78 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Rakesh Kudurumalla; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0dca79e1ec0c120c1059c4b015651e4bf1a5f309
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0dca79e1ec0c120c1059c4b015651e4bf1a5f309 Mon Sep 17 00:00:00 2001
From: Rakesh Kudurumalla <rkudurumalla@marvell.com>
Date: Tue, 1 Oct 2024 14:17:10 +0530
Subject: [PATCH] common/cnxk: fix base log level
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f ]
In the a247fcd94598 changeset, the PMD log type was removed and
a driver-specific log type was added for CNXK.
This patch changes the CNXK base log level from NOTICE to INFO
to display logs while running applications.
Fixes: a247fcd94598 ("drivers: use dedicated log macros instead of PMD logtype")
Signed-off-by: Rakesh Kudurumalla <rkudurumalla@marvell.com>
---
drivers/common/cnxk/roc_platform.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_platform.c b/drivers/common/cnxk/roc_platform.c
index 80d81742a2..c57dcbe731 100644
--- a/drivers/common/cnxk/roc_platform.c
+++ b/drivers/common/cnxk/roc_platform.c
@@ -85,7 +85,7 @@ roc_plt_init(void)
return 0;
}
-RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, NOTICE);
+RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_base, base, INFO);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_mbox, mbox, NOTICE);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_cpt, crypto, NOTICE);
RTE_LOG_REGISTER_SUFFIX(cnxk_logtype_ml, ml, NOTICE);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.115108999 +0800
+++ 0042-common-cnxk-fix-base-log-level.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f Mon Sep 17 00:00:00 2001
+From 0dca79e1ec0c120c1059c4b015651e4bf1a5f309 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit adc561fc5352bd1f1c8e736a33bb9b03bbb95b3f ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 30379c7e5e..f1e0a93d97 100644
+index 80d81742a2..c57dcbe731 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/cnxk: fix IRQ reconfiguration' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (41 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
` (77 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Pavan Nikhilesh; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463 Mon Sep 17 00:00:00 2001
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
Date: Tue, 1 Oct 2024 18:41:09 +0530
Subject: [PATCH] common/cnxk: fix IRQ reconfiguration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 758b58f06a43564f435e3ecc1a8af994564a6b6b ]
Unregister SSO device and NPA IRQs before resizing
IRQs to clean up stale IRQ handles.
Fixes: 993107f0f440 ("common/cnxk: limit SSO interrupt allocation count")
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
drivers/common/cnxk/roc_dev.c | 16 +++++++---------
drivers/common/cnxk/roc_dev_priv.h | 2 ++
drivers/common/cnxk/roc_sso.c | 7 +++++++
3 files changed, 16 insertions(+), 9 deletions(-)
diff --git a/drivers/common/cnxk/roc_dev.c b/drivers/common/cnxk/roc_dev.c
index 35eb8b7628..793d78fdbc 100644
--- a/drivers/common/cnxk/roc_dev.c
+++ b/drivers/common/cnxk/roc_dev.c
@@ -947,8 +947,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
RVU_VF_INT_VEC_MBOX);
}
-static void
-mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+void
+dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev)
{
if (dev_is_vf(dev))
mbox_unregister_vf_irq(pci_dev, dev);
@@ -1026,8 +1026,8 @@ roc_pf_vf_flr_irq(void *param)
}
}
-static int
-vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+void
+dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
{
struct plt_intr_handle *intr_handle = pci_dev->intr_handle;
int i;
@@ -1043,8 +1043,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
dev_irq_unregister(intr_handle, roc_pf_vf_flr_irq, dev,
RVU_PF_INT_VEC_VFFLR1);
-
- return 0;
}
int
@@ -1529,7 +1527,7 @@ thread_fail:
iounmap:
dev_vf_mbase_put(pci_dev, vf_mbase);
mbox_unregister:
- mbox_unregister_irq(pci_dev, dev);
+ dev_mbox_unregister_irq(pci_dev, dev);
if (dev->ops)
plt_free(dev->ops);
mbox_fini:
@@ -1565,10 +1563,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
if (dev->lmt_mz)
plt_memzone_free(dev->lmt_mz);
- mbox_unregister_irq(pci_dev, dev);
+ dev_mbox_unregister_irq(pci_dev, dev);
if (!dev_is_vf(dev))
- vf_flr_unregister_irqs(pci_dev, dev);
+ dev_vf_flr_unregister_irqs(pci_dev, dev);
/* Release PF - VF */
mbox = &dev->mbox_vfpf;
if (mbox->hwbase && mbox->dev)
diff --git a/drivers/common/cnxk/roc_dev_priv.h b/drivers/common/cnxk/roc_dev_priv.h
index 5b2c5096f8..f1fa498dc1 100644
--- a/drivers/common/cnxk/roc_dev_priv.h
+++ b/drivers/common/cnxk/roc_dev_priv.h
@@ -128,6 +128,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
int dev_irq_reconfigure(struct plt_intr_handle *intr_handle, uint16_t max_intr);
int dev_mbox_register_irq(struct plt_pci_device *pci_dev, struct dev *dev);
+void dev_mbox_unregister_irq(struct plt_pci_device *pci_dev, struct dev *dev);
int dev_vf_flr_register_irqs(struct plt_pci_device *pci_dev, struct dev *dev);
+void dev_vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev);
#endif /* _ROC_DEV_PRIV_H */
diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c
index b02c9c7f38..14cdf14554 100644
--- a/drivers/common/cnxk/roc_sso.c
+++ b/drivers/common/cnxk/roc_sso.c
@@ -765,7 +765,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
return dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt + npa_vec_cnt);
}
+ /* Before re-configuring unregister irqs */
npa_vec_cnt = (dev->npa.pci_dev == pci_dev) ? NPA_LF_INT_VEC_POISON + 1 : 0;
+ if (npa_vec_cnt)
+ npa_unregister_irqs(&dev->npa);
+
+ dev_mbox_unregister_irq(pci_dev, dev);
+ if (!dev_is_vf(dev))
+ dev_vf_flr_unregister_irqs(pci_dev, dev);
/* Re-configure to include SSO vectors */
rc = dev_irq_reconfigure(pci_dev->intr_handle, mbox_vec_cnt + npa_vec_cnt + sso_vec_cnt);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.144644499 +0800
+++ 0043-common-cnxk-fix-IRQ-reconfiguration.patch 2024-11-11 14:23:05.102192840 +0800
@@ -1 +1 @@
-From 758b58f06a43564f435e3ecc1a8af994564a6b6b Mon Sep 17 00:00:00 2001
+From e1e6e73a44dcd1e62731ed963bdd3b15fe1b7463 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 758b58f06a43564f435e3ecc1a8af994564a6b6b ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 26aa35894b..c905d35ea6 100644
+index 35eb8b7628..793d78fdbc 100644
@@ -23,2 +25,2 @@
-@@ -1047,8 +1047,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
- dev_irq_unregister(intr_handle, roc_pf_vf_mbox_irq, dev, RVU_VF_INT_VEC_MBOX);
+@@ -947,8 +947,8 @@ mbox_unregister_vf_irq(struct plt_pci_device *pci_dev, struct dev *dev)
+ RVU_VF_INT_VEC_MBOX);
@@ -34 +36 @@
-@@ -1126,8 +1126,8 @@ roc_pf_vf_flr_irq(void *param)
+@@ -1026,8 +1026,8 @@ roc_pf_vf_flr_irq(void *param)
@@ -45 +47 @@
-@@ -1143,8 +1143,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
+@@ -1043,8 +1043,6 @@ vf_flr_unregister_irqs(struct plt_pci_device *pci_dev, struct dev *dev)
@@ -54 +56 @@
-@@ -1723,7 +1721,7 @@ thread_fail:
+@@ -1529,7 +1527,7 @@ thread_fail:
@@ -63 +65 @@
-@@ -1761,10 +1759,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
+@@ -1565,10 +1563,10 @@ dev_fini(struct dev *dev, struct plt_pci_device *pci_dev)
@@ -70 +72 @@
- if (!dev_is_vf(dev)) {
+ if (!dev_is_vf(dev))
@@ -73,3 +75,3 @@
- /* Releasing memory allocated for mbox region */
- if (dev->vf_mbox_mz)
- plt_memzone_free(dev->vf_mbox_mz);
+ /* Release PF - VF */
+ mbox = &dev->mbox_vfpf;
+ if (mbox->hwbase && mbox->dev)
@@ -77 +79 @@
-index 434e165b56..5ab4f72f8f 100644
+index 5b2c5096f8..f1fa498dc1 100644
@@ -80 +82 @@
-@@ -170,6 +170,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
+@@ -128,6 +128,8 @@ int dev_irqs_disable(struct plt_intr_handle *intr_handle);
@@ -90 +92 @@
-index 499f93e373..2e3b134bfc 100644
+index b02c9c7f38..14cdf14554 100644
@@ -93 +95 @@
-@@ -842,7 +842,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
+@@ -765,7 +765,14 @@ sso_update_msix_vec_count(struct roc_sso *roc_sso, uint16_t sso_vec_cnt)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'baseband/acc: fix access to deallocated mem' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (42 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
` (76 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Hernan Vargas; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=25491476c647f6dac991948667d804e334b39aa9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 25491476c647f6dac991948667d804e334b39aa9 Mon Sep 17 00:00:00 2001
From: Hernan Vargas <hernan.vargas@intel.com>
Date: Wed, 9 Oct 2024 14:12:51 -0700
Subject: [PATCH] baseband/acc: fix access to deallocated mem
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a090b8ffe73ed21d54e17e5d5711d2e817d7229e ]
Prevent op_addr access during queue_stop operation, as this memory may
have been deallocated.
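To illustrate the hazard, here is a minimal standalone C sketch (illustrative
structures only, not the acc PMD's types): the ring keeps a raw copy of an op
pointer whose lifetime is owned by the application, so dereferencing it at
queue-stop time can be a use-after-free.

    #include <stdlib.h>

    struct fake_op  { int status; };
    struct fake_req { struct fake_op *op_addr; };

    int main(void)
    {
        struct fake_req ring[1];
        struct fake_op *op = malloc(sizeof(*op));

        if (op == NULL)
            return 1;
        ring[0].op_addr = op;  /* enqueue: the ring borrows the pointer */
        free(op);              /* the application dequeues and frees the op */

        /* A queue-stop debug dump must not do this, op_addr now dangles: */
        /* printf("%d\n", ring[0].op_addr->status); */
        return 0;
    }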
Fixes: e640f6cdfa84 ("baseband/acc200: add LDPC processing")
Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/baseband/acc/rte_acc100_pmd.c | 36 ----------------------
drivers/baseband/acc/rte_vrb_pmd.c | 44 +--------------------------
2 files changed, 1 insertion(+), 79 deletions(-)
diff --git a/drivers/baseband/acc/rte_acc100_pmd.c b/drivers/baseband/acc/rte_acc100_pmd.c
index 9d028f0f48..3e135c480d 100644
--- a/drivers/baseband/acc/rte_acc100_pmd.c
+++ b/drivers/baseband/acc/rte_acc100_pmd.c
@@ -838,51 +838,15 @@ free_q:
return ret;
}
-static inline void
-acc100_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
- uint16_t index)
-{
- if (op == NULL)
- return;
- if (op_type == RTE_BBDEV_OP_LDPC_DEC)
- rte_bbdev_log(DEBUG,
- " Op 5GUL %d %d %d %d %d %d %d %d %d %d %d %d",
- index,
- op->ldpc_dec.basegraph, op->ldpc_dec.z_c,
- op->ldpc_dec.n_cb, op->ldpc_dec.q_m,
- op->ldpc_dec.n_filler, op->ldpc_dec.cb_params.e,
- op->ldpc_dec.op_flags, op->ldpc_dec.rv_index,
- op->ldpc_dec.iter_max, op->ldpc_dec.iter_count,
- op->ldpc_dec.harq_combined_input.length
- );
- else if (op_type == RTE_BBDEV_OP_LDPC_ENC) {
- struct rte_bbdev_enc_op *op_dl = (struct rte_bbdev_enc_op *) op;
- rte_bbdev_log(DEBUG,
- " Op 5GDL %d %d %d %d %d %d %d %d %d",
- index,
- op_dl->ldpc_enc.basegraph, op_dl->ldpc_enc.z_c,
- op_dl->ldpc_enc.n_cb, op_dl->ldpc_enc.q_m,
- op_dl->ldpc_enc.n_filler, op_dl->ldpc_enc.cb_params.e,
- op_dl->ldpc_enc.op_flags, op_dl->ldpc_enc.rv_index
- );
- }
-}
-
static int
acc100_queue_stop(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc_queue *q;
- struct rte_bbdev_dec_op *op;
- uint16_t i;
q = dev->data->queues[queue_id].queue_private;
rte_bbdev_log(INFO, "Queue Stop %d H/T/D %d %d %x OpType %d",
queue_id, q->sw_ring_head, q->sw_ring_tail,
q->sw_ring_depth, q->op_type);
- for (i = 0; i < q->sw_ring_depth; ++i) {
- op = (q->ring_addr + i)->req.op_addr;
- acc100_print_op(op, q->op_type, i);
- }
/* ignore all operations in flight and clear counters */
q->sw_ring_tail = q->sw_ring_head;
q->aq_enqueued = 0;
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 88e1d03ebf..2ff3f313cb 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1047,58 +1047,16 @@ free_q:
return ret;
}
-static inline void
-vrb_print_op(struct rte_bbdev_dec_op *op, enum rte_bbdev_op_type op_type,
- uint16_t index)
-{
- if (op == NULL)
- return;
- if (op_type == RTE_BBDEV_OP_LDPC_DEC)
- rte_bbdev_log(INFO,
- " Op 5GUL %d %d %d %d %d %d %d %d %d %d %d %d",
- index,
- op->ldpc_dec.basegraph, op->ldpc_dec.z_c,
- op->ldpc_dec.n_cb, op->ldpc_dec.q_m,
- op->ldpc_dec.n_filler, op->ldpc_dec.cb_params.e,
- op->ldpc_dec.op_flags, op->ldpc_dec.rv_index,
- op->ldpc_dec.iter_max, op->ldpc_dec.iter_count,
- op->ldpc_dec.harq_combined_input.length
- );
- else if (op_type == RTE_BBDEV_OP_LDPC_ENC) {
- struct rte_bbdev_enc_op *op_dl = (struct rte_bbdev_enc_op *) op;
- rte_bbdev_log(INFO,
- " Op 5GDL %d %d %d %d %d %d %d %d %d",
- index,
- op_dl->ldpc_enc.basegraph, op_dl->ldpc_enc.z_c,
- op_dl->ldpc_enc.n_cb, op_dl->ldpc_enc.q_m,
- op_dl->ldpc_enc.n_filler, op_dl->ldpc_enc.cb_params.e,
- op_dl->ldpc_enc.op_flags, op_dl->ldpc_enc.rv_index
- );
- } else if (op_type == RTE_BBDEV_OP_MLDTS) {
- struct rte_bbdev_mldts_op *op_mldts = (struct rte_bbdev_mldts_op *) op;
- rte_bbdev_log(INFO, " Op MLD %d RBs %d NL %d Rp %d %d %x",
- index,
- op_mldts->mldts.num_rbs, op_mldts->mldts.num_layers,
- op_mldts->mldts.r_rep,
- op_mldts->mldts.c_rep, op_mldts->mldts.op_flags);
- }
-}
-
/* Stop queue and clear counters. */
static int
vrb_queue_stop(struct rte_bbdev *dev, uint16_t queue_id)
{
struct acc_queue *q;
- struct rte_bbdev_dec_op *op;
- uint16_t i;
+
q = dev->data->queues[queue_id].queue_private;
rte_bbdev_log(INFO, "Queue Stop %d H/T/D %d %d %x OpType %d",
queue_id, q->sw_ring_head, q->sw_ring_tail,
q->sw_ring_depth, q->op_type);
- for (i = 0; i < q->sw_ring_depth; ++i) {
- op = (q->ring_addr + i)->req.op_addr;
- vrb_print_op(op, q->op_type, i);
- }
/* ignore all operations in flight and clear counters */
q->sw_ring_tail = q->sw_ring_head;
q->aq_enqueued = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.188469798 +0800
+++ 0044-baseband-acc-fix-access-to-deallocated-mem.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From a090b8ffe73ed21d54e17e5d5711d2e817d7229e Mon Sep 17 00:00:00 2001
+From 25491476c647f6dac991948667d804e334b39aa9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a090b8ffe73ed21d54e17e5d5711d2e817d7229e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 5e6ee85e13..c690d1492b 100644
+index 9d028f0f48..3e135c480d 100644
@@ -76 +78 @@
-index 646c12ad5c..e3f98d6e42 100644
+index 88e1d03ebf..2ff3f313cb 100644
@@ -79 +81 @@
-@@ -1048,58 +1048,16 @@ free_q:
+@@ -1047,58 +1047,16 @@ free_q:
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'baseband/acc: fix soft output bypass RM' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (43 preceding siblings ...)
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
` (75 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Hernan Vargas; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9 Mon Sep 17 00:00:00 2001
From: Hernan Vargas <hernan.vargas@intel.com>
Date: Wed, 9 Oct 2024 14:12:52 -0700
Subject: [PATCH] baseband/acc: fix soft output bypass RM
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2fd167b61bc6c6f40a6c04085caa56be40451e2a ]
Removing soft output bypass RM capability due to VRB2 device
limitations.
Fixes: b49fe052f9cd ("baseband/acc: add FEC capabilities for VRB2 variant")
Signed-off-by: Hernan Vargas <hernan.vargas@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/baseband/acc/rte_vrb_pmd.c | 7 +++----
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/drivers/baseband/acc/rte_vrb_pmd.c b/drivers/baseband/acc/rte_vrb_pmd.c
index 2ff3f313cb..4979bb8cec 100644
--- a/drivers/baseband/acc/rte_vrb_pmd.c
+++ b/drivers/baseband/acc/rte_vrb_pmd.c
@@ -1270,7 +1270,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
RTE_BBDEV_LDPC_HARQ_4BIT_COMPRESSION |
RTE_BBDEV_LDPC_LLR_COMPRESSION |
RTE_BBDEV_LDPC_SOFT_OUT_ENABLE |
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS |
RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS |
RTE_BBDEV_LDPC_DEC_INTERRUPTS,
.llr_size = 8,
@@ -1584,18 +1583,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
fcw->so_en = check_bit(op->ldpc_dec.op_flags, RTE_BBDEV_LDPC_SOFT_OUT_ENABLE);
fcw->so_bypass_intlv = check_bit(op->ldpc_dec.op_flags,
RTE_BBDEV_LDPC_SOFT_OUT_DEINTERLEAVER_BYPASS);
- fcw->so_bypass_rm = check_bit(op->ldpc_dec.op_flags,
- RTE_BBDEV_LDPC_SOFT_OUT_RM_BYPASS);
+ fcw->so_bypass_rm = 0;
fcw->minsum_offset = 1;
fcw->dec_llrclip = 2;
}
/*
- * These are all implicitly set
+ * These are all implicitly set:
* fcw->synd_post = 0;
* fcw->dec_convllr = 0;
* fcw->hcout_convllr = 0;
* fcw->hcout_size1 = 0;
+ * fcw->so_it = 0;
* fcw->hcout_offset = 0;
* fcw->negstop_th = 0;
* fcw->negstop_it = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.246393997 +0800
+++ 0045-baseband-acc-fix-soft-output-bypass-RM.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From 2fd167b61bc6c6f40a6c04085caa56be40451e2a Mon Sep 17 00:00:00 2001
+From 9a9eee381e03a93ff8c0380d2d6e5aa5125a06e9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2fd167b61bc6c6f40a6c04085caa56be40451e2a ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index e3f98d6e42..52a683e4e4 100644
+index 2ff3f313cb..4979bb8cec 100644
@@ -22 +24 @@
-@@ -1272,7 +1272,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
+@@ -1270,7 +1270,6 @@ vrb_dev_info_get(struct rte_bbdev *dev, struct rte_bbdev_driver_info *dev_info)
@@ -30 +32 @@
-@@ -1643,18 +1642,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
+@@ -1584,18 +1583,18 @@ vrb_fcw_ld_fill(struct rte_bbdev_dec_op *op, struct acc_fcw_ld *fcw,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vhost: fix offset while mapping log base address' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (44 preceding siblings ...)
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
` (74 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bill Xiang; +Cc: xuemingl, Chenbo Xia, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1f2330c670d1bedb5e54a790eda3874d402ab88d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1f2330c670d1bedb5e54a790eda3874d402ab88d Mon Sep 17 00:00:00 2001
From: Bill Xiang <xiangwencheng@dayudpu.com>
Date: Mon, 8 Jul 2024 14:57:49 +0800
Subject: [PATCH] vhost: fix offset while mapping log base address
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bdd96d8ac76ca412165b2d1bbd3701e978246d8e ]
For sanity the offset should be the last parameter of mmap.
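For reference, the POSIX prototype puts the offset last; the sketch below is a
standalone example (not the vhost code itself) mapping size bytes starting at a
page-aligned offset the way the fixed call does.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* void *mmap(void *addr, size_t length, int prot, int flags,
     *            int fd, off_t offset); */
    int main(void)
    {
        long pg = sysconf(_SC_PAGESIZE);
        size_t size = (size_t)pg;
        off_t off = (off_t)pg;          /* must be page-size aligned */
        int fd = open("/tmp/mmap-offset-example", O_RDWR | O_CREAT, 0600);

        if (fd < 0 || ftruncate(fd, off + size) != 0)
            return 1;
        void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, off);
        if (addr == MAP_FAILED)
            perror("mmap");
        else
            munmap(addr, size);
        close(fd);
        return 0;
    }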
Fixes: fbc4d248b198 ("vhost: fix offset while mmaping log base address")
Signed-off-by: Bill Xiang <xiangwencheng@dayudpu.com>
Reviewed-by: Chenbo Xia <chenbox@nvidia.com>
---
.mailmap | 1 +
lib/vhost/vhost_user.c | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 4022645615..f4d2a72009 100644
--- a/.mailmap
+++ b/.mailmap
@@ -177,6 +177,7 @@ Bert van Leeuwen <bert.vanleeuwen@netronome.com>
Bhagyada Modali <bhagyada.modali@amd.com>
Bharat Mota <bmota@vmware.com>
Bill Hong <bhong@brocade.com>
+Bill Xiang <xiangwencheng@dayudpu.com>
Billy McFall <bmcfall@redhat.com>
Billy O'Mahony <billy.o.mahony@intel.com>
Bing Zhao <bingz@nvidia.com> <bingz@mellanox.com> <bing.zhao@hxt-semitech.com>
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index f8e42dd619..5b6c90437c 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -2328,7 +2328,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
* mmap from 0 to workaround a hugepage mmap bug: mmap will
* fail when offset is not page size aligned.
*/
- addr = mmap(0, size + off, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
+ addr = mmap(0, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, off);
alignment = get_blk_size(fd);
close(fd);
if (addr == MAP_FAILED) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.286278496 +0800
+++ 0046-vhost-fix-offset-while-mapping-log-base-address.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From bdd96d8ac76ca412165b2d1bbd3701e978246d8e Mon Sep 17 00:00:00 2001
+From 1f2330c670d1bedb5e54a790eda3874d402ab88d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bdd96d8ac76ca412165b2d1bbd3701e978246d8e ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index ed4ea17c4c..544e62df7d 100644
+index 4022645615..f4d2a72009 100644
@@ -22,3 +24,3 @@
-@@ -183,6 +183,7 @@ Bhagyada Modali <bhagyada.modali@amd.com>
- Bharat Mota <bharat.mota@broadcom.com> <bmota@vmware.com>
- Bhuvan Mital <bhuvan.mital@amd.com>
+@@ -177,6 +177,7 @@ Bert van Leeuwen <bert.vanleeuwen@netronome.com>
+ Bhagyada Modali <bhagyada.modali@amd.com>
+ Bharat Mota <bmota@vmware.com>
@@ -31 +33 @@
-index 5f470da38a..0893ae80bb 100644
+index f8e42dd619..5b6c90437c 100644
@@ -34 +36 @@
-@@ -2399,7 +2399,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
+@@ -2328,7 +2328,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vdpa: update used flags in used ring relay' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (45 preceding siblings ...)
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
` (73 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bill Xiang; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7 Mon Sep 17 00:00:00 2001
From: Bill Xiang <xiangwencheng@dayudpu.com>
Date: Wed, 17 Jul 2024 11:24:47 +0800
Subject: [PATCH] vdpa: update used flags in used ring relay
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b3f923fe1710e448c073f03aad2c087ffb6c7a5c ]
The vDPA device will work incorrectly if flags such as
VRING_USED_F_NO_NOTIFY are not updated correctly.
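As a rough sketch of the idea (using the generic split-ring layout from the
virtio spec, not the lib/vhost types), the relay has to copy the flags word as
well as the used entries:

    #include <stdint.h>

    struct used_elem { uint32_t id; uint32_t len; };
    struct used_ring {
        uint16_t flags;               /* e.g. VRING_USED_F_NO_NOTIFY */
        uint16_t idx;
        struct used_elem ring[];
    };

    static void
    relay_used(struct used_ring *guest, const struct used_ring *shadow,
               uint16_t size)
    {
        uint16_t idx = guest->idx;

        /* Propagate notification-suppression flags set by the device side. */
        guest->flags = shadow->flags;

        while (idx != shadow->idx) {
            guest->ring[idx % size] = shadow->ring[idx % size];
            idx++;
        }
        guest->idx = idx;
    }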
Fixes: b13ad2decc83 ("vhost: provide helpers for virtio ring relay")
Signed-off-by: Bill Xiang <xiangwencheng@dayudpu.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/vdpa.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/lib/vhost/vdpa.c b/lib/vhost/vdpa.c
index ce4fb09859..f9730d0685 100644
--- a/lib/vhost/vdpa.c
+++ b/lib/vhost/vdpa.c
@@ -174,6 +174,7 @@ rte_vdpa_relay_vring_used(int vid, uint16_t qid, void *vring_m)
idx = vq->used->idx;
idx_m = s_vring->used->idx;
ret = (uint16_t)(idx_m - idx);
+ vq->used->flags = s_vring->used->flags;
while (idx != idx_m) {
/* copy used entry, used ring logging is not covered here */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:07.331825395 +0800
+++ 0047-vdpa-update-used-flags-in-used-ring-relay.patch 2024-11-11 14:23:05.112192840 +0800
@@ -1 +1 @@
-From b3f923fe1710e448c073f03aad2c087ffb6c7a5c Mon Sep 17 00:00:00 2001
+From 23b4bf7939c8bc38ed6d132c27cdb6905a9acaf7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b3f923fe1710e448c073f03aad2c087ffb6c7a5c ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index a1dd5a753b..8abb073675 100644
+index ce4fb09859..f9730d0685 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vdpa/nfp: fix hardware initialization' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (46 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
` (72 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Xinying Yu
Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d0688a90529e682fd211f991e7a0b7ceb899993e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d0688a90529e682fd211f991e7a0b7ceb899993e Mon Sep 17 00:00:00 2001
From: Xinying Yu <xinying.yu@corigine.com>
Date: Mon, 5 Aug 2024 10:12:39 +0800
Subject: [PATCH] vdpa/nfp: fix hardware initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fc470d5e88f848957b8f6d2089210254525e9e13 ]
Reconfiguring the NIC will fail because of the missing
initialization logic for the queue configuration pointer.
Fix this by adding the correct initialization logic.
Fixes: d89f4990c14e ("vdpa/nfp: add hardware init")
Signed-off-by: Xinying Yu <xinying.yu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
.mailmap | 1 +
drivers/vdpa/nfp/nfp_vdpa_core.c | 9 +++++++++
2 files changed, 10 insertions(+)
diff --git a/.mailmap b/.mailmap
index f4d2a72009..a72dce1a61 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1601,6 +1601,7 @@ Xieming Katty <katty.xieming@huawei.com>
Xinfeng Zhao <xinfengx.zhao@intel.com>
Xingguang He <xingguang.he@intel.com>
Xingyou Chen <niatlantice@gmail.com>
+Xinying Yu <xinying.yu@corigine.com>
Xin Long <longxin.xl@alibaba-inc.com>
Xi Zhang <xix.zhang@intel.com>
Xuan Ding <xuan.ding@intel.com>
diff --git a/drivers/vdpa/nfp/nfp_vdpa_core.c b/drivers/vdpa/nfp/nfp_vdpa_core.c
index 7b877605e4..291798196c 100644
--- a/drivers/vdpa/nfp/nfp_vdpa_core.c
+++ b/drivers/vdpa/nfp/nfp_vdpa_core.c
@@ -55,7 +55,10 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
struct rte_pci_device *pci_dev)
{
uint32_t queue;
+ uint8_t *tx_bar;
+ uint32_t start_q;
struct nfp_hw *hw;
+ uint32_t tx_bar_off;
uint8_t *notify_base;
hw = &vdpa_hw->super;
@@ -82,6 +85,12 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
idx + 1, vdpa_hw->notify_addr[idx + 1]);
}
+ /* NFP vDPA cfg queue setup */
+ start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
+ tx_bar_off = start_q * NFP_QCP_QUEUE_ADDR_SZ;
+ tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;
+ hw->qcp_cfg = tx_bar + NFP_QCP_QUEUE_ADDR_SZ;
+
vdpa_hw->features = (1ULL << VIRTIO_F_VERSION_1) |
(1ULL << VIRTIO_F_IN_ORDER) |
(1ULL << VHOST_USER_F_PROTOCOL_FEATURES);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.394025494 +0800
+++ 0048-vdpa-nfp-fix-hardware-initialization.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From fc470d5e88f848957b8f6d2089210254525e9e13 Mon Sep 17 00:00:00 2001
+From d0688a90529e682fd211f991e7a0b7ceb899993e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fc470d5e88f848957b8f6d2089210254525e9e13 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 544e62df7d..f51b1dda5d 100644
+index f4d2a72009..a72dce1a61 100644
@@ -27 +29 @@
-@@ -1655,6 +1655,7 @@ Xieming Katty <katty.xieming@huawei.com>
+@@ -1601,6 +1601,7 @@ Xieming Katty <katty.xieming@huawei.com>
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vdpa/nfp: fix reconfiguration' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (47 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
` (71 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Xinying Yu
Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a46c4c1b435b99d9b5a3a2a9867b452cf42abd49
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a46c4c1b435b99d9b5a3a2a9867b452cf42abd49 Mon Sep 17 00:00:00 2001
From: Xinying Yu <xinying.yu@corigine.com>
Date: Mon, 5 Aug 2024 10:12:40 +0800
Subject: [PATCH] vdpa/nfp: fix reconfiguration
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d149827203a61da4c8c9e4a13e07bb0260438124 ]
The vDPA ctrl words are located in the extended word, so
'nfp_ext_reconfig()' should be used rather than 'nfp_reconfig()'.
Also replace the misused 'NFP_NET_CFG_CTRL_SCATTER' macro
with 'NFP_NET_CFG_CTRL_VIRTIO'.
Fixes: b47a0373903f ("vdpa/nfp: add datapath update")
Signed-off-by: Xinying Yu <xinying.yu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/common/nfp/nfp_common_ctrl.h | 1 +
drivers/vdpa/nfp/nfp_vdpa_core.c | 16 ++++++++++++----
2 files changed, 13 insertions(+), 4 deletions(-)
diff --git a/drivers/common/nfp/nfp_common_ctrl.h b/drivers/common/nfp/nfp_common_ctrl.h
index d09fd2b892..532bc6584a 100644
--- a/drivers/common/nfp/nfp_common_ctrl.h
+++ b/drivers/common/nfp/nfp_common_ctrl.h
@@ -223,6 +223,7 @@ struct nfp_net_fw_ver {
#define NFP_NET_CFG_CTRL_IPSEC_SM_LOOKUP (0x1 << 3) /**< SA short match lookup */
#define NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP (0x1 << 4) /**< SA long match lookup */
#define NFP_NET_CFG_CTRL_MULTI_PF (0x1 << 5)
+#define NFP_NET_CFG_CTRL_VIRTIO (0x1 << 10) /**< Virtio offload */
#define NFP_NET_CFG_CTRL_IN_ORDER (0x1 << 11) /**< Virtio in-order flag */
#define NFP_NET_CFG_CAP_WORD1 0x00a4
diff --git a/drivers/vdpa/nfp/nfp_vdpa_core.c b/drivers/vdpa/nfp/nfp_vdpa_core.c
index 291798196c..6d07356581 100644
--- a/drivers/vdpa/nfp/nfp_vdpa_core.c
+++ b/drivers/vdpa/nfp/nfp_vdpa_core.c
@@ -101,7 +101,7 @@ nfp_vdpa_hw_init(struct nfp_vdpa_hw *vdpa_hw,
static uint32_t
nfp_vdpa_check_offloads(void)
{
- return NFP_NET_CFG_CTRL_SCATTER |
+ return NFP_NET_CFG_CTRL_VIRTIO |
NFP_NET_CFG_CTRL_IN_ORDER;
}
@@ -112,6 +112,7 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
int ret;
uint32_t update;
uint32_t new_ctrl;
+ uint32_t new_ext_ctrl;
struct timespec wait_tst;
struct nfp_hw *hw = &vdpa_hw->super;
uint8_t mac_addr[RTE_ETHER_ADDR_LEN];
@@ -131,8 +132,6 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
nfp_disable_queues(hw);
nfp_enable_queues(hw, NFP_VDPA_MAX_QUEUES, NFP_VDPA_MAX_QUEUES);
- new_ctrl = nfp_vdpa_check_offloads();
-
nn_cfg_writel(hw, NFP_NET_CFG_MTU, 9216);
nn_cfg_writel(hw, NFP_NET_CFG_FLBUFSZ, 10240);
@@ -147,8 +146,17 @@ nfp_vdpa_hw_start(struct nfp_vdpa_hw *vdpa_hw,
/* Writing new MAC to the specific port BAR address */
nfp_write_mac(hw, (uint8_t *)mac_addr);
+ new_ext_ctrl = nfp_vdpa_check_offloads();
+
+ update = NFP_NET_CFG_UPDATE_GEN;
+ ret = nfp_ext_reconfig(hw, new_ext_ctrl, update);
+ if (ret != 0)
+ return -EIO;
+
+ hw->ctrl_ext = new_ext_ctrl;
+
/* Enable device */
- new_ctrl |= NFP_NET_CFG_CTRL_ENABLE;
+ new_ctrl = NFP_NET_CFG_CTRL_ENABLE;
/* Signal the NIC about the change */
update = NFP_NET_CFG_UPDATE_MACADDR |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.484732592 +0800
+++ 0049-vdpa-nfp-fix-reconfiguration.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From d149827203a61da4c8c9e4a13e07bb0260438124 Mon Sep 17 00:00:00 2001
+From a46c4c1b435b99d9b5a3a2a9867b452cf42abd49 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d149827203a61da4c8c9e4a13e07bb0260438124 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 69596dd6f5..1b30f81fdb 100644
+index d09fd2b892..532bc6584a 100644
@@ -29 +31,2 @@
-@@ -205,6 +205,7 @@ struct nfp_net_fw_ver {
+@@ -223,6 +223,7 @@ struct nfp_net_fw_ver {
+ #define NFP_NET_CFG_CTRL_IPSEC_SM_LOOKUP (0x1 << 3) /**< SA short match lookup */
@@ -32 +34,0 @@
- #define NFP_NET_CFG_CTRL_FLOW_STEER (0x1 << 8) /**< Flow Steering */
@@ -35 +36,0 @@
- #define NFP_NET_CFG_CTRL_USO (0x1 << 16) /**< UDP segmentation offload */
@@ -36,0 +38 @@
+ #define NFP_NET_CFG_CAP_WORD1 0x00a4
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/virtio-user: reset used index counter' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (48 preceding siblings ...)
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
` (70 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Kommula Shiva Shankar; +Cc: xuemingl, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9c24c7b819f29a9f8962df3e2a7742d1dfee98a4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9c24c7b819f29a9f8962df3e2a7742d1dfee98a4 Mon Sep 17 00:00:00 2001
From: Kommula Shiva Shankar <kshankar@marvell.com>
Date: Mon, 5 Aug 2024 10:08:41 +0000
Subject: [PATCH] net/virtio-user: reset used index counter
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 ]
When the virtio device is reinitialized during ethdev reconfiguration,
all the virtio rings are recreated and repopulated on the device.
Accordingly, reset the used index counter value back to zero.
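A minimal sketch of the bookkeeping involved (illustrative struct, not the
PMD's): when the packed ring is recreated, every driver-side counter must
return to its initial value, including the shadow used index.

    struct ring_state {
        int avail_wrap_counter;
        int used_wrap_counter;
        unsigned short used_idx;
    };

    static void
    ring_state_reset(struct ring_state *st)
    {
        st->avail_wrap_counter = 1;
        st->used_wrap_counter = 1;
        st->used_idx = 0;    /* the reset added by this fix */
    }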
Fixes: 48a4464029a7 ("net/virtio-user: support control VQ for packed")
Signed-off-by: Kommula Shiva Shankar <kshankar@marvell.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/net/virtio/virtio_user_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index 3a31642899..f176df86d4 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -199,6 +199,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
vring->device = (void *)(uintptr_t)used_addr;
dev->packed_queues[queue_idx].avail_wrap_counter = true;
dev->packed_queues[queue_idx].used_wrap_counter = true;
+ dev->packed_queues[queue_idx].used_idx = 0;
for (i = 0; i < vring->num; i++)
vring->desc[i].flags = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.533567591 +0800
+++ 0050-net-virtio-user-reset-used-index-counter.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 Mon Sep 17 00:00:00 2001
+From 9c24c7b819f29a9f8962df3e2a7742d1dfee98a4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ff11fc60c5d8d9ae5a0f0114db4c3bc834090548 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index bf29f0dacd..747dddeb2e 100644
+index 3a31642899..f176df86d4 100644
@@ -23 +25 @@
-@@ -204,6 +204,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
+@@ -199,6 +199,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vhost: restrict set max queue pair API to VDUSE' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (49 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
` (69 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Maxime Coquelin; +Cc: xuemingl, Yu Jiang, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e1bd966815e01d4bbac59b413ec35e5ebd0a416c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e1bd966815e01d4bbac59b413ec35e5ebd0a416c Mon Sep 17 00:00:00 2001
From: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 3 Oct 2024 10:11:10 +0200
Subject: [PATCH] vhost: restrict set max queue pair API to VDUSE
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e1808999d36bb2e136a649f4651f36030aa468f1 ]
In order to avoid breaking Vhost-user live-migration, we want the
rte_vhost_driver_set_max_queue_num API to only be effective with
VDUSE.
Furthermore, this API is only really needed for VDUSE, where the
number of device queues is defined by the backend. For Vhost-user,
it is defined by the frontend (e.g. QEMU), so the advantage of
further restricting the maximum number of queue pairs is limited to
a small memory gain (a handful of pointers).
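A usage sketch under the new semantics (the VDUSE path name below is an
assumption for illustration): the call caps the queue pairs for a VDUSE
backend and is simply ignored for a Vhost-user socket.

    #include <rte_vhost.h>

    static int
    setup_vduse_device(void)
    {
        const char *path = "/dev/vduse/net0";   /* assumed VDUSE path */

        if (rte_vhost_driver_register(path, 0) != 0)
            return -1;

        /* Effective for VDUSE only; a Vhost-user socket keeps
         * VHOST_MAX_QUEUE_PAIRS. */
        if (rte_vhost_driver_set_max_queue_num(path, 4) != 0)
            return -1;

        return rte_vhost_driver_start(path);
    }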
Fixes: 4aa1f88ac13d ("vhost: add API to set max queue pairs")
Reported-by: Yu Jiang <yux.jiang@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: David Marchand <david.marchand@redhat.com>
---
lib/vhost/rte_vhost.h | 2 ++
lib/vhost/socket.c | 11 +++++++++++
2 files changed, 13 insertions(+)
diff --git a/lib/vhost/rte_vhost.h b/lib/vhost/rte_vhost.h
index db92f05344..c6dba67a67 100644
--- a/lib/vhost/rte_vhost.h
+++ b/lib/vhost/rte_vhost.h
@@ -613,6 +613,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
* @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
*
* Set the maximum number of queue pairs supported by the device.
+ * The value set is ignored for Vhost-user backends. It is only taken into
+ * account with VDUSE backends.
*
* @param path
* The vhost-user socket file path
diff --git a/lib/vhost/socket.c b/lib/vhost/socket.c
index 0b95c54c5b..ffb8518e74 100644
--- a/lib/vhost/socket.c
+++ b/lib/vhost/socket.c
@@ -865,6 +865,17 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
goto unlock_exit;
}
+ /*
+ * This is only useful for VDUSE for which number of virtqueues is set
+ * by the backend. For Vhost-user, the number of virtqueues is defined
+ * by the frontend.
+ */
+ if (!vsocket->is_vduse) {
+ VHOST_LOG_CONFIG(path, DEBUG, "Keeping %u max queue pairs for Vhost-user backend\n",
+ VHOST_MAX_QUEUE_PAIRS);
+ goto unlock_exit;
+ }
+
vsocket->max_queue_pairs = max_queue_pairs;
unlock_exit:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.569998190 +0800
+++ 0051-vhost-restrict-set-max-queue-pair-API-to-VDUSE.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From e1808999d36bb2e136a649f4651f36030aa468f1 Mon Sep 17 00:00:00 2001
+From e1bd966815e01d4bbac59b413ec35e5ebd0a416c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e1808999d36bb2e136a649f4651f36030aa468f1 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -24,2 +26,2 @@
- lib/vhost/socket.c | 12 ++++++++++++
- 2 files changed, 14 insertions(+)
+ lib/vhost/socket.c | 11 +++++++++++
+ 2 files changed, 13 insertions(+)
@@ -28 +30 @@
-index c7a5f56df8..1a91a00f02 100644
+index db92f05344..c6dba67a67 100644
@@ -31 +33 @@
-@@ -614,6 +614,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
+@@ -613,6 +613,8 @@ rte_vhost_driver_get_queue_num(const char *path, uint32_t *queue_num);
@@ -41 +43 @@
-index a75728a2e4..d29d15494c 100644
+index 0b95c54c5b..ffb8518e74 100644
@@ -44 +46 @@
-@@ -860,6 +860,18 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
+@@ -865,6 +865,17 @@ rte_vhost_driver_set_max_queue_num(const char *path, uint32_t max_queue_pairs)
@@ -54,3 +56,2 @@
-+ VHOST_CONFIG_LOG(path, DEBUG,
-+ "Keeping %u max queue pairs for Vhost-user backend",
-+ VHOST_MAX_QUEUE_PAIRS);
++ VHOST_LOG_CONFIG(path, DEBUG, "Keeping %u max queue pairs for Vhost-user backend\n",
++ VHOST_MAX_QUEUE_PAIRS);
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'fib: fix AVX512 lookup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (50 preceding siblings ...)
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
` (68 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c098add6c3bd7d23d304d79c1ce8c3e76368c622
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c098add6c3bd7d23d304d79c1ce8c3e76368c622 Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Fri, 6 Sep 2024 17:04:36 +0000
Subject: [PATCH] fib: fix AVX512 lookup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 66ed1786ad067198814e9b2ab54f0cad68a58f1e ]
Vector lookup uses gather instructions, which load data in 4-byte chunks.
This could lead to an out-of-bounds access at the end of the tbl24 in the
case of 1- or 2-byte entries, e.g. if a lookup is attempted for
255.255.255.255 in the IPv4 case.
This patch fixes the potential out-of-bounds access by the gather
instruction by allocating an extra 4 bytes at the end of the tbl24.
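The arithmetic behind the fix, as a standalone sketch (the entry count and the
1-byte next-hop size are assumptions based on the /24 table, not values read
from lib/fib):

    #include <inttypes.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t num_ent = 1ULL << 24;  /* one tbl24 entry per /24 prefix */
        uint32_t nh_sz = 1;             /* 1-byte next hop entries */
        uint64_t tbl_bytes = num_ent * nh_sz;

        /* Looking up 255.255.255.255 gathers a full 32-bit lane starting
         * at the very last valid byte of the table. */
        uint64_t load_end = (num_ent - 1) * nh_sz + sizeof(uint32_t);

        printf("table: %" PRIu64 " bytes, gather reads up to byte %" PRIu64
               " (%" PRIu64 " past the end)\n",
               tbl_bytes, load_end, load_end - tbl_bytes);
        /* Padding the allocation by sizeof(uint32_t) keeps it in bounds. */
        return 0;
    }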
Fixes: b3509fa3653e ("fib: add AVX512 lookup")
Fixes: 1e5630e40d95 ("fib6: add AVX512 lookup")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
lib/fib/dir24_8.c | 4 ++--
lib/fib/trie.c | 4 ++--
2 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/lib/fib/dir24_8.c b/lib/fib/dir24_8.c
index c739e92304..07c324743b 100644
--- a/lib/fib/dir24_8.c
+++ b/lib/fib/dir24_8.c
@@ -526,8 +526,8 @@ dir24_8_create(const char *name, int socket_id, struct rte_fib_conf *fib_conf)
snprintf(mem_name, sizeof(mem_name), "DP_%s", name);
dp = rte_zmalloc_socket(name, sizeof(struct dir24_8_tbl) +
- DIR24_8_TBL24_NUM_ENT * (1 << nh_sz), RTE_CACHE_LINE_SIZE,
- socket_id);
+ DIR24_8_TBL24_NUM_ENT * (1 << nh_sz) + sizeof(uint32_t),
+ RTE_CACHE_LINE_SIZE, socket_id);
if (dp == NULL) {
rte_errno = ENOMEM;
return NULL;
diff --git a/lib/fib/trie.c b/lib/fib/trie.c
index 7b33cdaa7b..ca1c2fe3bc 100644
--- a/lib/fib/trie.c
+++ b/lib/fib/trie.c
@@ -647,8 +647,8 @@ trie_create(const char *name, int socket_id,
snprintf(mem_name, sizeof(mem_name), "DP_%s", name);
dp = rte_zmalloc_socket(name, sizeof(struct rte_trie_tbl) +
- TRIE_TBL24_NUM_ENT * (1 << nh_sz), RTE_CACHE_LINE_SIZE,
- socket_id);
+ TRIE_TBL24_NUM_ENT * (1 << nh_sz) + sizeof(uint32_t),
+ RTE_CACHE_LINE_SIZE, socket_id);
if (dp == NULL) {
rte_errno = ENOMEM;
return dp;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.607431989 +0800
+++ 0052-fib-fix-AVX512-lookup.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 66ed1786ad067198814e9b2ab54f0cad68a58f1e Mon Sep 17 00:00:00 2001
+From c098add6c3bd7d23d304d79c1ce8c3e76368c622 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 66ed1786ad067198814e9b2ab54f0cad68a58f1e ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/e1000: fix link status crash in secondary process' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (51 preceding siblings ...)
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
` (67 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Jun Wang; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=be22d7ff5d84de7805ebe4e4db3993114ea16408
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From be22d7ff5d84de7805ebe4e4db3993114ea16408 Mon Sep 17 00:00:00 2001
From: Jun Wang <junwang01@cestc.cn>
Date: Fri, 12 Jul 2024 19:30:47 +0800
Subject: [PATCH] net/e1000: fix link status crash in secondary process
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 84506cfe07326fd6ddb158f3fa57bd678751561a ]
The code to update link status is not safe in a secondary process.
If called from a secondary process it will crash; example from dumpcap:
/dpdk/app/dpdk-dumpcap -i 0000:00:04.0
File: /tmp/dpdk-dumpcap_0_0000:00:04.0_20240723020203.pcapng
Segmentation fault (core dumped)
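The guard applied by the patch, as a generic sketch (function and device names
are illustrative; this compiles against DPDK headers): an ethdev op that
touches link or hardware state bails out when not running in the primary
process.

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    static int
    example_link_update(struct rte_eth_dev *dev, int wait_to_complete)
    {
        RTE_SET_USED(dev);
        RTE_SET_USED(wait_to_complete);

        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
            return -1;  /* link/hardware state is owned by the primary */

        /* ... primary-only register access and link polling ... */
        return 0;
    }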
Fixes: 805803445a02 ("e1000: support EM devices (also known as e1000/e1000e)")
Signed-off-by: Jun Wang <junwang01@cestc.cn>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/e1000/em_ethdev.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/e1000/em_ethdev.c b/drivers/net/e1000/em_ethdev.c
index c5a4dec693..f6875b0762 100644
--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ -1136,6 +1136,9 @@ eth_em_link_update(struct rte_eth_dev *dev, int wait_to_complete)
struct rte_eth_link link;
int link_up, count;
+ if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+ return -1;
+
link_up = 0;
hw->mac.get_link_status = 1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.638718189 +0800
+++ 0053-net-e1000-fix-link-status-crash-in-secondary-process.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 84506cfe07326fd6ddb158f3fa57bd678751561a Mon Sep 17 00:00:00 2001
+From be22d7ff5d84de7805ebe4e4db3993114ea16408 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 84506cfe07326fd6ddb158f3fa57bd678751561a ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/cpfl: add checks for flow action types' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (52 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
` (66 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Praveen Shetty; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fa3204c299911a82fbcab1b3fac2adf8813e1621
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fa3204c299911a82fbcab1b3fac2adf8813e1621 Mon Sep 17 00:00:00 2001
From: Praveen Shetty <praveen.shetty@intel.com>
Date: Tue, 30 Jul 2024 11:45:40 +0000
Subject: [PATCH] net/cpfl: add checks for flow action types
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 86126195768418da56031305cdf3636ceb6650c8 ]
In CPFL PMD, port_representor action is used for the local vport and
represented_port action is used for the remote port (remote port in this
case is either the idpf pf or the vf port that is being represented by
the cpfl pmd). Using these the other way around is an error, so add a
check for those cases.
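As a usage sketch from the application side (generic rte_flow action
structures; the local/remote decision logic is illustrative), the destination
type decides which forwarding action is valid:

    #include <rte_flow.h>

    static void
    build_fwd_action(struct rte_flow_action *act,
                     struct rte_flow_action_ethdev *conf,
                     uint16_t dst_port_id, int dst_is_local_vport)
    {
        conf->port_id = dst_port_id;
        /* Local vport -> port_representor; represented (remote) port ->
         * represented_port. The PMD now rejects the opposite pairing. */
        act->type = dst_is_local_vport ?
                RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR :
                RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT;
        act->conf = conf;
    }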
Fixes: 441e777b85f1 ("net/cpfl: support represented port action")
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
index f6bd1f7599..9101a0e506 100644
--- a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
+++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
@@ -292,6 +292,17 @@ cpfl_fxp_parse_action(struct cpfl_itf *itf,
is_vsi = (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR ||
dst_itf->type == CPFL_ITF_TYPE_REPRESENTOR);
+ /* Added checks to throw an error for the invalid action types. */
+ if (action_type == RTE_FLOW_ACTION_TYPE_PORT_REPRESENTOR &&
+ dst_itf->type == CPFL_ITF_TYPE_REPRESENTOR) {
+ PMD_DRV_LOG(ERR, "Cannot use port_representor action for the represented_port");
+ goto err;
+ }
+ if (action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT &&
+ dst_itf->type == CPFL_ITF_TYPE_VPORT) {
+ PMD_DRV_LOG(ERR, "Cannot use represented_port action for the local vport");
+ goto err;
+ }
if (is_vsi)
dev_id = cpfl_get_vsi_id(dst_itf);
else
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.684731688 +0800
+++ 0054-net-cpfl-add-checks-for-flow-action-types.patch 2024-11-11 14:23:05.122192840 +0800
@@ -1 +1 @@
-From 86126195768418da56031305cdf3636ceb6650c8 Mon Sep 17 00:00:00 2001
+From fa3204c299911a82fbcab1b3fac2adf8813e1621 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 86126195768418da56031305cdf3636ceb6650c8 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index b9e825ef57..2c75ea6577 100644
+index f6bd1f7599..9101a0e506 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/iavf: fix crash when link is unstable' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (53 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
` (65 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Kaiwen Deng; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25 Mon Sep 17 00:00:00 2001
From: Kaiwen Deng <kaiwenx.deng@intel.com>
Date: Tue, 6 Aug 2024 08:35:27 +0800
Subject: [PATCH] net/iavf: fix crash when link is unstable
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 57ed9ca61f44ffc3801f55c749347bd717834008 ]
An unstable physical link can result in a large number of link change
events. Some of these events may be received by the VF before the VF
resources are allocated, which leads to a core dump.
This commit checks that vf_res is valid before dereferencing it.
Fixes: 5e03e316c753 ("net/iavf: handle virtchnl event message without interrupt")
Signed-off-by: Kaiwen Deng <kaiwenx.deng@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_vchnl.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 1111d30f57..8ca104c04e 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -255,8 +255,8 @@ iavf_read_msg_from_pf(struct iavf_adapter *adapter, uint16_t buf_len,
case VIRTCHNL_EVENT_LINK_CHANGE:
vf->link_up =
vpe->event_data.link_event.link_status;
- if (vf->vf_res->vf_cap_flags &
- VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
+ if (vf->vf_res != NULL &&
+ vf->vf_res->vf_cap_flags & VIRTCHNL_VF_CAP_ADV_LINK_SPEED) {
vf->link_speed =
vpe->event_data.link_event_adv.link_speed;
} else {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.770511586 +0800
+++ 0055-net-iavf-fix-crash-when-link-is-unstable.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From 57ed9ca61f44ffc3801f55c749347bd717834008 Mon Sep 17 00:00:00 2001
+From 97d79944ae2b5d5736b2e7f5e643b4ad6be5fc25 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 57ed9ca61f44ffc3801f55c749347bd717834008 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 6d5969f084..69420bc9b6 100644
+index 1111d30f57..8ca104c04e 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/cpfl: fix parsing protocol ID mask field' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (54 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
` (64 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Praveen Shetty; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f2cb061aa1b15f690a56ead9334aab21480c7cf6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f2cb061aa1b15f690a56ead9334aab21480c7cf6 Mon Sep 17 00:00:00 2001
From: Praveen Shetty <praveen.shetty@intel.com>
Date: Fri, 23 Aug 2024 11:14:50 +0000
Subject: [PATCH] net/cpfl: fix parsing protocol ID mask field
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8125fea74b860a71605dfe94dc03ef73c912813e ]
The CPFL parser was incorrectly parsing the mask value of the next_proto_id
field from the recipe.json file as a string instead of an unsigned integer.
Fixes: 41f20298ee8c ("net/cpfl: parse flow offloading hint from JSON")
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/cpfl/cpfl_flow_parser.c | 34 +++++++++++++++++++----------
1 file changed, 22 insertions(+), 12 deletions(-)
diff --git a/drivers/net/cpfl/cpfl_flow_parser.c b/drivers/net/cpfl/cpfl_flow_parser.c
index 303e979015..a67c773d18 100644
--- a/drivers/net/cpfl/cpfl_flow_parser.c
+++ b/drivers/net/cpfl/cpfl_flow_parser.c
@@ -198,6 +198,8 @@ cpfl_flow_js_pattern_key_proto_field(json_t *ob_fields,
for (i = 0; i < len; i++) {
json_t *object;
const char *name, *mask;
+ uint32_t mask_32b = 0;
+ int ret;
object = json_array_get(ob_fields, i);
name = cpfl_json_t_to_string(object, "name");
@@ -213,20 +215,28 @@ cpfl_flow_js_pattern_key_proto_field(json_t *ob_fields,
if (js_field->type == RTE_FLOW_ITEM_TYPE_ETH ||
js_field->type == RTE_FLOW_ITEM_TYPE_IPV4) {
- mask = cpfl_json_t_to_string(object, "mask");
- if (!mask) {
- PMD_DRV_LOG(ERR, "Can not parse string 'mask'.");
- goto err;
- }
- if (strlen(mask) > CPFL_JS_STR_SIZE - 1) {
- PMD_DRV_LOG(ERR, "The 'mask' is too long.");
- goto err;
+ /* Added a check for parsing mask value of the next_proto_id field. */
+ if (strcmp(name, "next_proto_id") == 0) {
+ ret = cpfl_json_t_to_uint32(object, "mask", &mask_32b);
+ if (ret < 0) {
+ PMD_DRV_LOG(ERR, "Cannot parse uint32 'mask'.");
+ goto err;
+ }
+ js_field->fields[i].mask_32b = mask_32b;
+ } else {
+ mask = cpfl_json_t_to_string(object, "mask");
+ if (!mask) {
+ PMD_DRV_LOG(ERR, "Can not parse string 'mask'.");
+ goto err;
+ }
+ if (rte_strscpy(js_field->fields[i].mask,
+ mask, CPFL_JS_STR_SIZE) < 0) {
+ PMD_DRV_LOG(ERR, "The 'mask' is too long.");
+ goto err;
+ }
}
- strncpy(js_field->fields[i].mask, mask, CPFL_JS_STR_SIZE - 1);
- } else {
- uint32_t mask_32b;
- int ret;
+ } else {
ret = cpfl_json_t_to_uint32(object, "mask", &mask_32b);
if (ret < 0) {
PMD_DRV_LOG(ERR, "Can not parse uint32 'mask'.");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.841481184 +0800
+++ 0056-net-cpfl-fix-parsing-protocol-ID-mask-field.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From 8125fea74b860a71605dfe94dc03ef73c912813e Mon Sep 17 00:00:00 2001
+From f2cb061aa1b15f690a56ead9334aab21480c7cf6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8125fea74b860a71605dfe94dc03ef73c912813e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ice/base: fix link speed for 200G' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (55 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
` (63 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Paul Greenwalt; +Cc: xuemingl, Soumyadeep Hore, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=07885f6f163c85a7476a309661084396922b3d5b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 07885f6f163c85a7476a309661084396922b3d5b Mon Sep 17 00:00:00 2001
From: Paul Greenwalt <paul.greenwalt@intel.com>
Date: Fri, 23 Aug 2024 09:56:45 +0000
Subject: [PATCH] net/ice/base: fix link speed for 200G
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e3992ab377d2879d6c5bfb220865638404b85dba ]
When setting PHY configuration during driver initialization, 200G link
speed is not being advertised even when the PHY is capable. This is
because the get PHY capabilities link speed response is being masked by
ICE_AQ_LINK_SPEED_M, which does not include the 200G link speed bit.
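Purely as an illustration of the masking arithmetic (not part of the patch),
a mask of 0x7FF keeps only bits 0-10 of the reported link speed, so a
capability reported in the next bit up is silently dropped until the mask is
widened to 0xFFF; the exact bit position used for 200G is assumed here:
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Assumed position of the 200G capability bit, one above bit 10. */
        const uint16_t speed_200g = (uint16_t)(1u << 11);
        uint16_t link_speed = speed_200g;       /* PHY reports 200G capable */

        printf("0x7FF mask: 0x%03x\n", (unsigned)(link_speed & 0x7FF)); /* 0x000: lost */
        printf("0xFFF mask: 0x%03x\n", (unsigned)(link_speed & 0xFFF)); /* 0x800: kept */
        return 0;
    }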
Fixes: d13ad9cf1721 ("net/ice/base: add helper functions for PHY caching")
Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com>
Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ice/base/ice_adminq_cmd.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/ice/base/ice_adminq_cmd.h b/drivers/net/ice/base/ice_adminq_cmd.h
index 1131379d63..56ba2041f2 100644
--- a/drivers/net/ice/base/ice_adminq_cmd.h
+++ b/drivers/net/ice/base/ice_adminq_cmd.h
@@ -1621,7 +1621,7 @@ struct ice_aqc_get_link_status_data {
#define ICE_AQ_LINK_PWR_QSFP_CLASS_3 2
#define ICE_AQ_LINK_PWR_QSFP_CLASS_4 3
__le16 link_speed;
-#define ICE_AQ_LINK_SPEED_M 0x7FF
+#define ICE_AQ_LINK_SPEED_M 0xFFF
#define ICE_AQ_LINK_SPEED_10MB BIT(0)
#define ICE_AQ_LINK_SPEED_100MB BIT(1)
#define ICE_AQ_LINK_SPEED_1000MB BIT(2)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.904353783 +0800
+++ 0057-net-ice-base-fix-link-speed-for-200G.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From e3992ab377d2879d6c5bfb220865638404b85dba Mon Sep 17 00:00:00 2001
+From 07885f6f163c85a7476a309661084396922b3d5b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e3992ab377d2879d6c5bfb220865638404b85dba ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 6a89e1614a..3ec207927b 100644
+index 1131379d63..56ba2041f2 100644
@@ -25 +27 @@
-@@ -1624,7 +1624,7 @@ struct ice_aqc_get_link_status_data {
+@@ -1621,7 +1621,7 @@ struct ice_aqc_get_link_status_data {
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (56 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
` (62 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Fabio Pricoco
Cc: xuemingl, Jacob Keller, Soumyadeep Hore, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cfaccd4bdaa1a89f4386adbe1a011012a386f56a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cfaccd4bdaa1a89f4386adbe1a011012a386f56a Mon Sep 17 00:00:00 2001
From: Fabio Pricoco <fabio.pricoco@intel.com>
Date: Fri, 23 Aug 2024 09:56:42 +0000
Subject: [PATCH] net/ice/base: fix iteration of TLVs in Preserved Fields Area
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit dcb760bf0f951b404bce33a1dd14906154b58c75 ]
The ice_get_pfa_module_tlv() function iterates over the Preserved Fields
Area to read data from the Shadow RAM, including the Part Board Assembly
data, among others.
If the specific TLV being requested is not found in the current NVM, the
code will read past the end of the PFA, misinterpreting the last word of
the PFA and the word just after the PFA as another TLV. This typically
results in one extra iteration before the length check of the while loop
is triggered.
Correct the logic for determining the maximum PFA offset to include the
extra last word. Additionally, make the driver robust against overflows
by using check_add_overflow. This ensures that even if the NVM provides
bogus data, the driver will not overflow, and will instead log a useful
warning message. The check for whether the TLV length exceeds the PFA
length is also removed, in favor of relying on the overflow warning
instead.
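For reference (an illustrative sketch, not code from the patch),
__builtin_add_overflow is the GCC/Clang builtin that check_add_overflow maps
to in the patch; it reports whether the 16-bit sum wrapped instead of letting
the offset silently truncate:
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint16_t pfa_ptr = 0xFFF0, pfa_len = 0x0020, max_tlv;

        /* Returns true when the u16 addition wraps around. */
        if (__builtin_add_overflow(pfa_ptr, (uint16_t)(pfa_len - 1), &max_tlv))
            printf("bogus PFA length: 16-bit overflow detected\n");
        else
            printf("last PFA word at offset 0x%04x\n", (unsigned)max_tlv);
        return 0;
    }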
Fixes: 5d0b7b5fc491 ("net/ice/base: add read PBA module function")
Signed-off-by: Fabio Pricoco <fabio.pricoco@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Soumyadeep Hore <soumyadeep.hore@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ice/base/ice_nvm.c | 36 ++++++++++++++++++++++------------
1 file changed, 24 insertions(+), 12 deletions(-)
diff --git a/drivers/net/ice/base/ice_nvm.c b/drivers/net/ice/base/ice_nvm.c
index 6b0794f562..98c4c943ca 100644
--- a/drivers/net/ice/base/ice_nvm.c
+++ b/drivers/net/ice/base/ice_nvm.c
@@ -471,6 +471,8 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
return status;
}
+#define check_add_overflow __builtin_add_overflow
+
/**
* ice_get_pfa_module_tlv - Reads sub module TLV from NVM PFA
* @hw: pointer to hardware structure
@@ -487,8 +489,7 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
u16 module_type)
{
enum ice_status status;
- u16 pfa_len, pfa_ptr;
- u32 next_tlv;
+ u16 pfa_len, pfa_ptr, next_tlv, max_tlv;
status = ice_read_sr_word(hw, ICE_SR_PFA_PTR, &pfa_ptr);
if (status != ICE_SUCCESS) {
@@ -500,11 +501,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
ice_debug(hw, ICE_DBG_INIT, "Failed to read PFA length.\n");
return status;
}
- /* Starting with first TLV after PFA length, iterate through the list
+
+ if (check_add_overflow(pfa_ptr, (u16)(pfa_len - 1), &max_tlv)) {
+ ice_debug(hw, ICE_DBG_INIT, "PFA starts at offset %u. PFA length of %u caused 16-bit arithmetic overflow.\n",
+ pfa_ptr, pfa_len);
+ return ICE_ERR_INVAL_SIZE;
+ }
+
+ /* The Preserved Fields Area contains a sequence of TLVs which define
+ * its contents. The PFA length includes all of the TLVs, plus its
+ * initial length word itself, *and* one final word at the end of all
+ * of the TLVs.
+ *
+ * Starting with first TLV after PFA length, iterate through the list
* of TLVs to find the requested one.
*/
next_tlv = pfa_ptr + 1;
- while (next_tlv < ((u32)pfa_ptr + pfa_len)) {
+ while (next_tlv < max_tlv) {
u16 tlv_sub_module_type;
u16 tlv_len;
@@ -521,10 +534,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
ice_debug(hw, ICE_DBG_INIT, "Failed to read TLV length.\n");
break;
}
- if (tlv_len > pfa_len) {
- ice_debug(hw, ICE_DBG_INIT, "Invalid TLV length.\n");
- return ICE_ERR_INVAL_SIZE;
- }
if (tlv_sub_module_type == module_type) {
if (tlv_len) {
*module_tlv = (u16)next_tlv;
@@ -533,10 +542,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
}
return ICE_ERR_INVAL_SIZE;
}
- /* Check next TLV, i.e. current TLV pointer + length + 2 words
- * (for current TLV's type and length)
- */
- next_tlv = next_tlv + tlv_len + 2;
+
+ if (check_add_overflow(next_tlv, (u16)2, &next_tlv) ||
+ check_add_overflow(next_tlv, tlv_len, &next_tlv)) {
+ ice_debug(hw, ICE_DBG_INIT, "TLV of type %u and length 0x%04x caused 16-bit arithmetic overflow. The PFA starts at 0x%04x and has length of 0x%04x\n",
+ tlv_sub_module_type, tlv_len, pfa_ptr, pfa_len);
+ return ICE_ERR_INVAL_SIZE;
+ }
}
/* Module does not exist */
return ICE_ERR_DOES_NOT_EXIST;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:07.979390082 +0800
+++ 0058-net-ice-base-fix-iteration-of-TLVs-in-Preserved-Fiel.patch 2024-11-11 14:23:05.132192839 +0800
@@ -1 +1 @@
-From dcb760bf0f951b404bce33a1dd14906154b58c75 Mon Sep 17 00:00:00 2001
+From cfaccd4bdaa1a89f4386adbe1a011012a386f56a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit dcb760bf0f951b404bce33a1dd14906154b58c75 ]
@@ -25 +27,0 @@
-Cc: stable@dpdk.org
@@ -36 +38 @@
-index 5e982de4b5..56c6c96a95 100644
+index 6b0794f562..98c4c943ca 100644
@@ -39 +41 @@
-@@ -469,6 +469,8 @@ int ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
+@@ -471,6 +471,8 @@ enum ice_status ice_read_sr_word(struct ice_hw *hw, u16 offset, u16 *data)
@@ -48,2 +50 @@
-@@ -484,8 +486,7 @@ int
- ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -487,8 +489,7 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -51,0 +53 @@
+ enum ice_status status;
@@ -55 +56,0 @@
- int status;
@@ -58 +59,2 @@
-@@ -498,11 +499,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+ if (status != ICE_SUCCESS) {
+@@ -500,11 +501,23 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -84 +86 @@
-@@ -519,10 +532,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -521,10 +534,6 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
@@ -95 +97 @@
-@@ -531,10 +540,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
+@@ -533,10 +542,13 @@ ice_get_pfa_module_tlv(struct ice_hw *hw, u16 *module_tlv, u16 *module_tlv_len,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ixgbe/base: fix unchecked return value' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (57 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
` (61 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1d3aa94783958688adbc498e1bf6413201dd28cb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1d3aa94783958688adbc498e1bf6413201dd28cb Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Thu, 29 Aug 2024 10:00:11 +0100
Subject: [PATCH] net/ixgbe/base: fix unchecked return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit eb3684b191928ebb5d263e3f8ab1e309bfec099e ]
There was an unchecked return value in the ixgbe_stop_mac_link_on_d3_82599
function. Add a check of the return value from the called function
ixgbe_read_eeprom.
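A minimal sketch of the pattern (illustrative only, with a stubbed-out EEPROM
read standing in for the real ixgbe_read_eeprom): the word read from the
EEPROM is acted on only when the read itself reported success:
    #include <stdint.h>

    #define EX_SUCCESS 0
    #define EX_CCD_BIT (1u << 0)            /* placeholder bit position */

    /* Stub standing in for an EEPROM read that can fail. */
    static int ex_read_eeprom(uint16_t offset, uint16_t *word)
    {
        (void)offset;
        *word = EX_CCD_BIT;                 /* pretend the bit is set */
        return EX_SUCCESS;
    }

    int ex_should_disable_link_on_d3(void)
    {
        uint16_t ee_ctrl_2 = 0;
        int status = ex_read_eeprom(2, &ee_ctrl_2);

        /* Only trust ee_ctrl_2 when the read succeeded; otherwise it may
         * still hold its zero-initialised value.
         */
        return status == EX_SUCCESS && (ee_ctrl_2 & EX_CCD_BIT);
    }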
Fixes: b7ad3713b958 ("ixgbe/base: allow to disable link on D3")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/ixgbe/base/ixgbe_82599.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/drivers/net/ixgbe/base/ixgbe_82599.c b/drivers/net/ixgbe/base/ixgbe_82599.c
index c6e8b7e976..f37d83a0ab 100644
--- a/drivers/net/ixgbe/base/ixgbe_82599.c
+++ b/drivers/net/ixgbe/base/ixgbe_82599.c
@@ -554,13 +554,15 @@ out:
**/
void ixgbe_stop_mac_link_on_d3_82599(struct ixgbe_hw *hw)
{
- u32 autoc2_reg;
u16 ee_ctrl_2 = 0;
+ u32 autoc2_reg;
+ u32 status;
DEBUGFUNC("ixgbe_stop_mac_link_on_d3_82599");
- ixgbe_read_eeprom(hw, IXGBE_EEPROM_CTRL_2, &ee_ctrl_2);
+ status = ixgbe_read_eeprom(hw, IXGBE_EEPROM_CTRL_2, &ee_ctrl_2);
- if (!ixgbe_mng_present(hw) && !hw->wol_enabled &&
+ if (status == IXGBE_SUCCESS &&
+ !ixgbe_mng_present(hw) && !hw->wol_enabled &&
ee_ctrl_2 & IXGBE_EEPROM_CCD_BIT) {
autoc2_reg = IXGBE_READ_REG(hw, IXGBE_AUTOC2);
autoc2_reg |= IXGBE_AUTOC2_LINK_DISABLE_ON_D3_MASK;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.057296280 +0800
+++ 0059-net-ixgbe-base-fix-unchecked-return-value.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From eb3684b191928ebb5d263e3f8ab1e309bfec099e Mon Sep 17 00:00:00 2001
+From 1d3aa94783958688adbc498e1bf6413201dd28cb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit eb3684b191928ebb5d263e3f8ab1e309bfec099e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index c4ad906f0f..3110477700 100644
+index c6e8b7e976..f37d83a0ab 100644
@@ -24 +26 @@
-@@ -556,13 +556,15 @@ out:
+@@ -554,13 +554,15 @@ out:
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix setting flags in init function' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (58 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
` (60 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c04a0407ada46498d4a261aa53fca0f0ef118cb2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c04a0407ada46498d4a261aa53fca0f0ef118cb2 Mon Sep 17 00:00:00 2001
From: Anatoly Burakov <anatoly.burakov@intel.com>
Date: Mon, 2 Sep 2024 10:54:17 +0100
Subject: [PATCH] net/i40e/base: fix setting flags in init function
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit deb7c447d088903d06a76e2c719a8207c94a576e ]
The functionality to set i40e_hw's flags was moved to its own function
in AQ a while ago. However, the setting of hw->flags for X722 was not
removed, even though it has become unnecessary.
Fixes: 37b091c75b13 ("net/i40e/base: extend PHY access AQ command")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 3 ---
1 file changed, 3 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index ab655a0a72..821fb2fb36 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1019,9 +1019,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
else
hw->pf_id = (u8)(func_rid & 0x7);
- if (hw->mac.type == I40E_MAC_X722)
- hw->flags |= I40E_HW_FLAG_AQ_SRCTL_ACCESS_ENABLE |
- I40E_HW_FLAG_NVM_READ_REQUIRES_LOCK;
/* NVMUpdate features structure initialization */
hw->nvmupd_features.major = I40E_NVMUPD_FEATURES_API_VER_MAJOR;
hw->nvmupd_features.minor = I40E_NVMUPD_FEATURES_API_VER_MINOR;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.113492979 +0800
+++ 0060-net-i40e-base-fix-setting-flags-in-init-function.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From deb7c447d088903d06a76e2c719a8207c94a576e Mon Sep 17 00:00:00 2001
+From c04a0407ada46498d4a261aa53fca0f0ef118cb2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit deb7c447d088903d06a76e2c719a8207c94a576e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index e4de508aea..451cc2c1c7 100644
+index ab655a0a72..821fb2fb36 100644
@@ -23 +25 @@
-@@ -980,9 +980,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
+@@ -1019,9 +1019,6 @@ enum i40e_status_code i40e_init_shared_code(struct i40e_hw *hw)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix misleading debug logs and comments' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (59 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
` (59 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Aleksandr Loktionov
Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5ffbf6263003cc122adc4edadab31329d028c3c2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5ffbf6263003cc122adc4edadab31329d028c3c2 Mon Sep 17 00:00:00 2001
From: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Date: Mon, 2 Sep 2024 10:54:18 +0100
Subject: [PATCH] net/i40e/base: fix misleading debug logs and comments
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 719ec1bfebde956b661d403ef73ecb1e7483d50f ]
Both comments and debug logs for i40e_read_nvm_aq refer to writing, when
in actuality it's a read function. Fix both comments and debug logs.
Fixes: a8ac0bae54ae ("i40e/base: update shadow RAM read/write")
Signed-off-by: Aleksandr Loktionov <aleksandr.loktionov@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_nvm.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_nvm.c b/drivers/net/i40e/base/i40e_nvm.c
index f385042601..05816a4b79 100644
--- a/drivers/net/i40e/base/i40e_nvm.c
+++ b/drivers/net/i40e/base/i40e_nvm.c
@@ -223,11 +223,11 @@ read_nvm_exit:
* @hw: pointer to the HW structure.
* @module_pointer: module pointer location in words from the NVM beginning
* @offset: offset in words from module start
- * @words: number of words to write
- * @data: buffer with words to write to the Shadow RAM
+ * @words: number of words to read
+ * @data: buffer with words to read from the Shadow RAM
* @last_command: tells the AdminQ that this is the last command
*
- * Writes a 16 bit words buffer to the Shadow RAM using the admin command.
+ * Reads a 16 bit words buffer to the Shadow RAM using the admin command.
**/
STATIC enum i40e_status_code i40e_read_nvm_aq(struct i40e_hw *hw,
u8 module_pointer, u32 offset,
@@ -249,18 +249,18 @@ STATIC enum i40e_status_code i40e_read_nvm_aq(struct i40e_hw *hw,
*/
if ((offset + words) > hw->nvm.sr_size)
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write error: offset %d beyond Shadow RAM limit %d\n",
+ "NVM read error: offset %d beyond Shadow RAM limit %d\n",
(offset + words), hw->nvm.sr_size);
else if (words > I40E_SR_SECTOR_SIZE_IN_WORDS)
- /* We can write only up to 4KB (one sector), in one AQ write */
+ /* We can read only up to 4KB (one sector), in one AQ read */
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write fail error: tried to write %d words, limit is %d.\n",
+ "NVM read fail error: tried to read %d words, limit is %d.\n",
words, I40E_SR_SECTOR_SIZE_IN_WORDS);
else if (((offset + (words - 1)) / I40E_SR_SECTOR_SIZE_IN_WORDS)
!= (offset / I40E_SR_SECTOR_SIZE_IN_WORDS))
- /* A single write cannot spread over two sectors */
+ /* A single read cannot spread over two sectors */
i40e_debug(hw, I40E_DEBUG_NVM,
- "NVM write error: cannot spread over two sectors in a single write offset=%d words=%d\n",
+ "NVM read error: cannot spread over two sectors in a single read offset=%d words=%d\n",
offset, words);
else
ret_code = i40e_aq_read_nvm(hw, module_pointer,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.148919978 +0800
+++ 0061-net-i40e-base-fix-misleading-debug-logs-and-comments.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From 719ec1bfebde956b661d403ef73ecb1e7483d50f Mon Sep 17 00:00:00 2001
+From 5ffbf6263003cc122adc4edadab31329d028c3c2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 719ec1bfebde956b661d403ef73ecb1e7483d50f ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: add missing X710TL device check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (60 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
` (58 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Anatoly Burakov; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9f966b465d8ce98430feeb81a1db414b83a26a8d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9f966b465d8ce98430feeb81a1db414b83a26a8d Mon Sep 17 00:00:00 2001
From: Anatoly Burakov <anatoly.burakov@intel.com>
Date: Mon, 2 Sep 2024 10:54:19 +0100
Subject: [PATCH] net/i40e/base: add missing X710TL device check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 597e19e7eae17beb820795c3a8a97c547870ba26 ]
Commit c0ce1c4677fd ("net/i40e: add new X722 device") added a new X722
define as well as one for X710T*L (which wasn't called out in the commit
message); however, that define was not added to the I40E_IS_X710TL_DEVICE
check. This patch adds the missing define to the check.
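For illustration only (the IDs below are placeholders, not the real values
from i40e_devids.h): a device-ID membership macro matches nothing but the IDs
listed in it, so leaving one define out silently excludes that adapter from
the X710TL-specific paths:
    #include <stdio.h>

    /* Placeholder device IDs; the real values live in i40e_devids.h. */
    #define EX_DEV_ID_10G_BASE_T_BC 0x1111
    #define EX_DEV_ID_5G_BASE_T_BC  0x2222
    #define EX_DEV_ID_1G_BASE_T_BC  0x3333

    #define EX_IS_X710TL_DEVICE(d) \
        (((d) == EX_DEV_ID_10G_BASE_T_BC) || \
         ((d) == EX_DEV_ID_5G_BASE_T_BC) || \
         ((d) == EX_DEV_ID_1G_BASE_T_BC))

    int main(void)
    {
        /* Without the third entry, a 1G BASE-T BC device evaluates to 0
         * here and misses the X710TL-specific handling.
         */
        printf("%d\n", EX_IS_X710TL_DEVICE(EX_DEV_ID_1G_BASE_T_BC));
        return 0;
    }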
Fixes: c0ce1c4677fd ("net/i40e: add new X722 device")
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_devids.h | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/base/i40e_devids.h b/drivers/net/i40e/base/i40e_devids.h
index ee31e51f57..37d7ee9939 100644
--- a/drivers/net/i40e/base/i40e_devids.h
+++ b/drivers/net/i40e/base/i40e_devids.h
@@ -42,7 +42,8 @@
#define I40E_DEV_ID_10G_SFP 0x104E
#define I40E_IS_X710TL_DEVICE(d) \
(((d) == I40E_DEV_ID_10G_BASE_T_BC) || \
- ((d) == I40E_DEV_ID_5G_BASE_T_BC))
+ ((d) == I40E_DEV_ID_5G_BASE_T_BC) || \
+ ((d) == I40E_DEV_ID_1G_BASE_T_BC))
#define I40E_DEV_ID_KX_X722 0x37CE
#define I40E_DEV_ID_QSFP_X722 0x37CF
#define I40E_DEV_ID_SFP_X722 0x37D0
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.181442678 +0800
+++ 0062-net-i40e-base-add-missing-X710TL-device-check.patch 2024-11-11 14:23:05.142192839 +0800
@@ -1 +1 @@
-From 597e19e7eae17beb820795c3a8a97c547870ba26 Mon Sep 17 00:00:00 2001
+From 9f966b465d8ce98430feeb81a1db414b83a26a8d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 597e19e7eae17beb820795c3a8a97c547870ba26 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 11e98f1f28..0a323566d1 100644
+index ee31e51f57..37d7ee9939 100644
@@ -24,2 +26,2 @@
-@@ -31,7 +31,8 @@
- #define I40E_DEV_ID_1G_BASE_T_BC 0x0DD2
+@@ -42,7 +42,8 @@
+ #define I40E_DEV_ID_10G_SFP 0x104E
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix blinking X722 with X557 PHY' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (61 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix DDP loading with reserved track ID' " Xueming Li
` (57 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Eryk Rybak; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cd6164532228038bc6c4d96e51e83f241a53b7da
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cd6164532228038bc6c4d96e51e83f241a53b7da Mon Sep 17 00:00:00 2001
From: Eryk Rybak <eryk.roch.rybak@intel.com>
Date: Mon, 2 Sep 2024 10:54:23 +0100
Subject: [PATCH] net/i40e/base: fix blinking X722 with X557 PHY
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bf0183e9ab98c946e0c7e178149e4b685465b9b1 ]
On X722 with an X557 PHY, LEDs do not blink under certain circumstances,
because the function was attempting to avoid triggering LED activity when
it detected that the LED was already active. Fix it to just always trigger
LED blinking regardless of the LED state.
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Eryk Rybak <eryk.roch.rybak@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 32 -----------------------------
1 file changed, 32 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 821fb2fb36..243c547d19 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -1587,7 +1587,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
**/
u32 i40e_led_get(struct i40e_hw *hw)
{
- u32 current_mode = 0;
u32 mode = 0;
int i;
@@ -1600,21 +1599,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
if (!gpio_val)
continue;
- /* ignore gpio LED src mode entries related to the activity
- * LEDs
- */
- current_mode = ((gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK)
- >> I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT);
- switch (current_mode) {
- case I40E_COMBINED_ACTIVITY:
- case I40E_FILTER_ACTIVITY:
- case I40E_MAC_ACTIVITY:
- case I40E_LINK_ACTIVITY:
- continue;
- default:
- break;
- }
-
mode = (gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK) >>
I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT;
break;
@@ -1634,7 +1618,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
**/
void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
{
- u32 current_mode = 0;
int i;
if (mode & ~I40E_LED_MODE_VALID) {
@@ -1651,21 +1634,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
if (!gpio_val)
continue;
- /* ignore gpio LED src mode entries related to the activity
- * LEDs
- */
- current_mode = ((gpio_val & I40E_GLGEN_GPIO_CTL_LED_MODE_MASK)
- >> I40E_GLGEN_GPIO_CTL_LED_MODE_SHIFT);
- switch (current_mode) {
- case I40E_COMBINED_ACTIVITY:
- case I40E_FILTER_ACTIVITY:
- case I40E_MAC_ACTIVITY:
- case I40E_LINK_ACTIVITY:
- continue;
- default:
- break;
- }
-
if (I40E_IS_X710TL_DEVICE(hw->device_id)) {
u32 pin_func = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:08.214233477 +0800
+++ 0063-net-i40e-base-fix-blinking-X722-with-X557-PHY.patch 2024-11-11 14:23:05.152192839 +0800
@@ -1 +1 @@
-From bf0183e9ab98c946e0c7e178149e4b685465b9b1 Mon Sep 17 00:00:00 2001
+From cd6164532228038bc6c4d96e51e83f241a53b7da Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bf0183e9ab98c946e0c7e178149e4b685465b9b1 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index be27cc9d0b..80500697ed 100644
+index 821fb2fb36..243c547d19 100644
@@ -25 +27 @@
-@@ -1548,7 +1548,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
+@@ -1587,7 +1587,6 @@ static u32 i40e_led_is_mine(struct i40e_hw *hw, int idx)
@@ -33 +35 @@
-@@ -1561,21 +1560,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
+@@ -1600,21 +1599,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
@@ -55 +57 @@
-@@ -1595,7 +1579,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
+@@ -1634,7 +1618,6 @@ u32 i40e_led_get(struct i40e_hw *hw)
@@ -63 +65 @@
-@@ -1612,21 +1595,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
+@@ -1651,21 +1634,6 @@ void i40e_led_set(struct i40e_hw *hw, u32 mode, bool blink)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix DDP loading with reserved track ID' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (62 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix repeated register dumps' " Xueming Li
` (56 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Artur Tyminski; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=77d7459a65fb61be0f040a9ce78ae63d360e4f4e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 77d7459a65fb61be0f040a9ce78ae63d360e4f4e Mon Sep 17 00:00:00 2001
From: Artur Tyminski <arturx.tyminski@intel.com>
Date: Mon, 2 Sep 2024 10:54:26 +0100
Subject: [PATCH] net/i40e/base: fix DDP loading with reserved track ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f646061cd9328f1265d8b9996c9b734ab2ce3707 ]
Packages with reserved track IDs should not be loaded, yet currently,
the driver only checks one of the reserved IDs, but not the other.
Fix the DDP package loading to also check for the other reserved track
ID.
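Sketch of the resulting check, for illustration only (the placeholder
sentinels below are not the real I40E_DDP_TRACKID_* values): a profile is
accepted only when its track ID matches neither reserved value:
    #include <stdbool.h>
    #include <stdint.h>

    #define EX_TRACKID_INVALID 0x00000000u   /* placeholder sentinel */
    #define EX_TRACKID_RDONLY  0xFFFFFFFFu   /* placeholder sentinel */

    /* Reject both reserved track IDs, not just one of them. */
    bool ex_track_id_is_loadable(uint32_t track_id)
    {
        return track_id != EX_TRACKID_INVALID &&
               track_id != EX_TRACKID_RDONLY;
    }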
Fixes: 496a357f1118 ("net/i40e/base: extend processing of DDP")
Signed-off-by: Artur Tyminski <arturx.tyminski@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 243c547d19..4d565963a9 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -8168,7 +8168,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
u32 sec_off;
u32 i;
- if (track_id == I40E_DDP_TRACKID_INVALID) {
+ if (track_id == I40E_DDP_TRACKID_INVALID ||
+ track_id == I40E_DDP_TRACKID_RDONLY) {
i40e_debug(hw, I40E_DEBUG_PACKAGE, "Invalid track_id\n");
return I40E_NOT_SUPPORTED;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.250022976 +0800
+++ 0064-net-i40e-base-fix-DDP-loading-with-reserved-track-ID.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From f646061cd9328f1265d8b9996c9b734ab2ce3707 Mon Sep 17 00:00:00 2001
+From 77d7459a65fb61be0f040a9ce78ae63d360e4f4e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f646061cd9328f1265d8b9996c9b734ab2ce3707 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index a43b89aaeb..693608ac99 100644
+index 243c547d19..4d565963a9 100644
@@ -25 +27 @@
-@@ -8048,7 +8048,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
+@@ -8168,7 +8168,8 @@ i40e_validate_profile(struct i40e_hw *hw, struct i40e_profile_segment *profile,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix repeated register dumps' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (63 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix DDP loading with reserved track ID' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix unchecked return value' " Xueming Li
` (55 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Radoslaw Tyl; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7c44c1b3f8b013f9e53710ddc05663dfe98a8f33
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7c44c1b3f8b013f9e53710ddc05663dfe98a8f33 Mon Sep 17 00:00:00 2001
From: Radoslaw Tyl <radoslawx.tyl@intel.com>
Date: Mon, 2 Sep 2024 10:54:33 +0100
Subject: [PATCH] net/i40e/base: fix repeated register dumps
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit efc6a6b1facfa160e5e72f55893a301a6b27c628 ]
Currently, when registers are dumped, the data inside them is changed,
so repeated dumps lead to unexpected results. Fix this by making the
register list read-only.
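A minimal sketch of the pattern, assuming a simplified table layout
(illustrative only): the table stays const and any runtime adjustment of the
element count goes into a local variable instead of being written back into
the table:
    #include <stdint.h>

    struct ex_reg_test_info {
        uint32_t offset;
        uint32_t mask;
        uint32_t elements;
        uint32_t stride;
    };

    /* const: a test pass can no longer modify the list it iterates over. */
    static const struct ex_reg_test_info ex_reg_list[] = {
        { 0x1000, 0x0000FFFF, 1, 0x4 },      /* placeholder entries */
        { 0, 0, 0, 0 }
    };

    void ex_diag_reg_test(uint32_t num_tx_qp)
    {
        for (uint32_t i = 0; ex_reg_list[i].offset != 0; i++) {
            /* Adjust the range in a local copy, leaving the table intact. */
            uint32_t elements = ex_reg_list[i].elements;

            if (num_tx_qp != 0)
                elements = num_tx_qp;

            for (uint32_t j = 0; j < elements; j++) {
                uint32_t reg = ex_reg_list[i].offset +
                               j * ex_reg_list[i].stride;
                (void)reg;                   /* pattern-test reg here */
            }
        }
    }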
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Radoslaw Tyl <radoslawx.tyl@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_diag.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_diag.c b/drivers/net/i40e/base/i40e_diag.c
index b3c4cfd3aa..4ca102cdd5 100644
--- a/drivers/net/i40e/base/i40e_diag.c
+++ b/drivers/net/i40e/base/i40e_diag.c
@@ -55,7 +55,7 @@ static enum i40e_status_code i40e_diag_reg_pattern_test(struct i40e_hw *hw,
return I40E_SUCCESS;
}
-static struct i40e_diag_reg_test_info i40e_reg_list[] = {
+static const struct i40e_diag_reg_test_info i40e_reg_list[] = {
/* offset mask elements stride */
{I40E_QTX_CTL(0), 0x0000FFBF, 1, I40E_QTX_CTL(1) - I40E_QTX_CTL(0)},
{I40E_PFINT_ITR0(0), 0x00000FFF, 3, I40E_PFINT_ITR0(1) - I40E_PFINT_ITR0(0)},
@@ -81,28 +81,28 @@ enum i40e_status_code i40e_diag_reg_test(struct i40e_hw *hw)
{
enum i40e_status_code ret_code = I40E_SUCCESS;
u32 reg, mask;
+ u32 elements;
u32 i, j;
for (i = 0; i40e_reg_list[i].offset != 0 &&
ret_code == I40E_SUCCESS; i++) {
+ elements = i40e_reg_list[i].elements;
/* set actual reg range for dynamically allocated resources */
if (i40e_reg_list[i].offset == I40E_QTX_CTL(0) &&
hw->func_caps.num_tx_qp != 0)
- i40e_reg_list[i].elements = hw->func_caps.num_tx_qp;
+ elements = hw->func_caps.num_tx_qp;
if ((i40e_reg_list[i].offset == I40E_PFINT_ITRN(0, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(1, 0) ||
i40e_reg_list[i].offset == I40E_PFINT_ITRN(2, 0) ||
i40e_reg_list[i].offset == I40E_QINT_TQCTL(0) ||
i40e_reg_list[i].offset == I40E_QINT_RQCTL(0)) &&
hw->func_caps.num_msix_vectors != 0)
- i40e_reg_list[i].elements =
- hw->func_caps.num_msix_vectors - 1;
+ elements = hw->func_caps.num_msix_vectors - 1;
/* test register access */
mask = i40e_reg_list[i].mask;
- for (j = 0; j < i40e_reg_list[i].elements &&
- ret_code == I40E_SUCCESS; j++) {
+ for (j = 0; j < elements && ret_code == I40E_SUCCESS; j++) {
reg = i40e_reg_list[i].offset
+ (j * i40e_reg_list[i].stride);
ret_code = i40e_diag_reg_pattern_test(hw, reg, mask);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.297586475 +0800
+++ 0065-net-i40e-base-fix-repeated-register-dumps.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From efc6a6b1facfa160e5e72f55893a301a6b27c628 Mon Sep 17 00:00:00 2001
+From 7c44c1b3f8b013f9e53710ddc05663dfe98a8f33 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit efc6a6b1facfa160e5e72f55893a301a6b27c628 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix unchecked return value' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (64 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix repeated register dumps' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix loop bounds' " Xueming Li
` (54 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bfc9bdcad325d486a798120507a7d93bfce2473d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bfc9bdcad325d486a798120507a7d93bfce2473d Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Mon, 2 Sep 2024 10:54:34 +0100
Subject: [PATCH] net/i40e/base: fix unchecked return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7fb34b9141aab299c2b84656ec5b12bf41f1c21d ]
Static analysis tools have reported an unchecked return value warning.
Address the warning by checking the return value.
Fixes: 2450cc2dc871 ("i40e/base: find partition id in NPAR mode and disable FCoE")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_common.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_common.c b/drivers/net/i40e/base/i40e_common.c
index 4d565963a9..547f5e3c2c 100644
--- a/drivers/net/i40e/base/i40e_common.c
+++ b/drivers/net/i40e/base/i40e_common.c
@@ -4228,8 +4228,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
/* use AQ read to get the physical register offset instead
* of the port relative offset
*/
- i40e_aq_debug_read_register(hw, port_cfg_reg, &port_cfg, NULL);
- if (!(port_cfg & I40E_PRTGEN_CNF_PORT_DIS_MASK))
+ status = i40e_aq_debug_read_register(hw, port_cfg_reg, &port_cfg, NULL);
+ if ((status == I40E_SUCCESS) && (!(port_cfg & I40E_PRTGEN_CNF_PORT_DIS_MASK)))
hw->num_ports++;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.330951475 +0800
+++ 0066-net-i40e-base-fix-unchecked-return-value.patch 2024-11-11 14:23:05.162192839 +0800
@@ -1 +1 @@
-From 7fb34b9141aab299c2b84656ec5b12bf41f1c21d Mon Sep 17 00:00:00 2001
+From bfc9bdcad325d486a798120507a7d93bfce2473d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7fb34b9141aab299c2b84656ec5b12bf41f1c21d ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 416f31dcc3..07e18deaea 100644
+index 4d565963a9..547f5e3c2c 100644
@@ -23 +25 @@
-@@ -4215,8 +4215,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
+@@ -4228,8 +4228,8 @@ STATIC void i40e_parse_discover_capabilities(struct i40e_hw *hw, void *buff,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e/base: fix loop bounds' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (65 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix unchecked return value' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: delay VF reset command' " Xueming Li
` (53 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Barbara Skobiej; +Cc: xuemingl, Anatoly Burakov, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=453f75257c416c6fd10b2011fcd463275e017ab4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 453f75257c416c6fd10b2011fcd463275e017ab4 Mon Sep 17 00:00:00 2001
From: Barbara Skobiej <barbara.skobiej@intel.com>
Date: Mon, 2 Sep 2024 10:54:35 +0100
Subject: [PATCH] net/i40e/base: fix loop bounds
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 ]
An unchecked value was used as a loop bound. Add a check that the value of
the 'next_to_clean' variable fits within its 10-bit range (2^10). Also,
refactor the loop so that it reads the head value only once and checks
whether the head is invalid.
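A rough sketch of the reworked loop shape, with the hardware access stubbed
out (illustrative only): the index is range-checked up front, the head is
read once per pass and validated against the ring size, and the loop exits
cleanly when the head catches up:
    #include <stdint.h>

    /* Stub standing in for reading the hardware head register. */
    static uint32_t ex_read_head(void)
    {
        return 0;
    }

    uint16_t ex_clean_queue(uint16_t ntc, uint16_t count)
    {
        if (ntc >= (1u << 10))          /* next_to_clean is a 10-bit index */
            return 0;

        for (;;) {
            uint32_t head = ex_read_head();   /* read once per iteration */

            if (head >= count)          /* improper head value: bail out */
                return 0;
            if (head == ntc)            /* caught up: nothing left to clean */
                break;

            /* process the descriptor at ntc, then advance with wrap-around */
            ntc = (uint16_t)((ntc + 1) % count);
        }
        return ntc;
    }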
Fixes: 8db9e2a1b232 ("i40e: base driver")
Signed-off-by: Barbara Skobiej <barbara.skobiej@intel.com>
Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/base/i40e_adminq.c | 19 +++++++++++++++++--
1 file changed, 17 insertions(+), 2 deletions(-)
diff --git a/drivers/net/i40e/base/i40e_adminq.c b/drivers/net/i40e/base/i40e_adminq.c
index 27c82d9b44..cd3b0f2e45 100644
--- a/drivers/net/i40e/base/i40e_adminq.c
+++ b/drivers/net/i40e/base/i40e_adminq.c
@@ -791,12 +791,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
u16 ntc = asq->next_to_clean;
struct i40e_aq_desc desc_cb;
struct i40e_aq_desc *desc;
+ u32 head = 0;
+
+ if (ntc >= (1 << 10))
+ goto clean_asq_exit;
desc = I40E_ADMINQ_DESC(*asq, ntc);
details = I40E_ADMINQ_DETAILS(*asq, ntc);
- while (rd32(hw, hw->aq.asq.head) != ntc) {
+ while (true) {
+ head = rd32(hw, hw->aq.asq.head);
+
+ if (head >= asq->count) {
+ i40e_debug(hw, I40E_DEBUG_AQ_COMMAND, "Read head value is improper\n");
+ return 0;
+ }
+
+ if (head == ntc)
+ break;
+
i40e_debug(hw, I40E_DEBUG_AQ_COMMAND,
- "ntc %d head %d.\n", ntc, rd32(hw, hw->aq.asq.head));
+ "ntc %d head %d.\n", ntc, head);
if (details->callback) {
I40E_ADMINQ_CALLBACK cb_func =
@@ -816,6 +830,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
asq->next_to_clean = ntc;
+clean_asq_exit:
return I40E_DESC_UNUSED(asq);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.369502474 +0800
+++ 0067-net-i40e-base-fix-loop-bounds.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 Mon Sep 17 00:00:00 2001
+From 453f75257c416c6fd10b2011fcd463275e017ab4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3e61fe48412f46daa66f7ccc8f03b1e7620d0b64 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index b670250180..350288269b 100644
+index 27c82d9b44..cd3b0f2e45 100644
@@ -26 +28 @@
-@@ -745,12 +745,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
+@@ -791,12 +791,26 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
@@ -55 +57 @@
-@@ -770,6 +784,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
+@@ -816,6 +830,7 @@ u16 i40e_clean_asq(struct i40e_hw *hw)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/iavf: delay VF reset command' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (66 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e/base: fix loop bounds' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' " Xueming Li
` (52 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, David Marchand, Hongbo Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=07e76f97817655b73aeebfd3dff698a4a1f5be4d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 07e76f97817655b73aeebfd3dff698a4a1f5be4d Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Mon, 9 Sep 2024 12:03:56 +0100
Subject: [PATCH] net/iavf: delay VF reset command
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b34fe66ea893c74f09322dc1109e80e81faa7d4f ]
Commit 0f9ec0cbd2a9 ("net/iavf: fix VF reset when using DCF"),
introduced a VF-reset adminq call into the reset sequence for iavf.
However, that call was very early in the sequence before other adminq
commands had been sent.
To delay the VF reset, we can put the message sending in the "dev_close"
function, right before the adminq is shut down, and thereby guaranteeing
that we won't have any subsequent issues with adminq messages.
In the process of making this change, we can also use the iavf_vf_reset
function from common/iavf, rather than hard-coding the lower-level
message-sending calls in the net driver.
Fixes: e74e1bb6280d ("net/iavf: enable port reset")
Fixes: 0f9ec0cbd2a9 ("net/iavf: fix VF reset when using DCF")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
Tested-by: Hongbo Li <hongbox.li@intel.com>
---
drivers/common/iavf/iavf_prototype.h | 1 +
drivers/common/iavf/version.map | 1 +
drivers/net/iavf/iavf_ethdev.c | 12 +-----------
3 files changed, 3 insertions(+), 11 deletions(-)
diff --git a/drivers/common/iavf/iavf_prototype.h b/drivers/common/iavf/iavf_prototype.h
index ba78ec5169..7c43a817bb 100644
--- a/drivers/common/iavf/iavf_prototype.h
+++ b/drivers/common/iavf/iavf_prototype.h
@@ -79,6 +79,7 @@ STATIC INLINE struct iavf_rx_ptype_decoded decode_rx_desc_ptype(u8 ptype)
__rte_internal
void iavf_vf_parse_hw_config(struct iavf_hw *hw,
struct virtchnl_vf_resource *msg);
+__rte_internal
enum iavf_status iavf_vf_reset(struct iavf_hw *hw);
__rte_internal
enum iavf_status iavf_aq_send_msg_to_pf(struct iavf_hw *hw,
diff --git a/drivers/common/iavf/version.map b/drivers/common/iavf/version.map
index e0f117197c..6c1427cca4 100644
--- a/drivers/common/iavf/version.map
+++ b/drivers/common/iavf/version.map
@@ -7,6 +7,7 @@ INTERNAL {
iavf_set_mac_type;
iavf_shutdown_adminq;
iavf_vf_parse_hw_config;
+ iavf_vf_reset;
local: *;
};
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 9087909ec2..1a98c7734c 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -2875,6 +2875,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
if (vf->promisc_unicast_enabled || vf->promisc_multicast_enabled)
iavf_config_promisc(adapter, false, false);
+ iavf_vf_reset(hw);
iavf_shutdown_adminq(hw);
if (vf->vf_res->vf_cap_flags & VIRTCHNL_VF_OFFLOAD_WB_ON_ITR) {
/* disable uio intr before callback unregister */
@@ -2954,17 +2955,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
struct iavf_adapter *adapter =
IAVF_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
struct iavf_hw *hw = IAVF_DEV_PRIVATE_TO_HW(dev->data->dev_private);
- struct iavf_info *vf = IAVF_DEV_PRIVATE_TO_VF(dev->data->dev_private);
-
- if (!vf->in_reset_recovery) {
- ret = iavf_aq_send_msg_to_pf(hw, VIRTCHNL_OP_RESET_VF,
- IAVF_SUCCESS, NULL, 0, NULL);
- if (ret) {
- PMD_DRV_LOG(ERR, "fail to send cmd VIRTCHNL_OP_RESET_VF");
- return ret;
- }
- }
-
/*
* Check whether the VF reset has been done and inform application,
* to avoid calling the virtual channel command, which may cause
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.403242673 +0800
+++ 0068-net-iavf-delay-VF-reset-command.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From b34fe66ea893c74f09322dc1109e80e81faa7d4f Mon Sep 17 00:00:00 2001
+From 07e76f97817655b73aeebfd3dff698a4a1f5be4d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b34fe66ea893c74f09322dc1109e80e81faa7d4f ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -57 +59 @@
-index c56fcfadf0..c200f63b4f 100644
+index 9087909ec2..1a98c7734c 100644
@@ -60 +62 @@
-@@ -2962,6 +2962,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
+@@ -2875,6 +2875,7 @@ iavf_dev_close(struct rte_eth_dev *dev)
@@ -68 +70 @@
-@@ -3041,17 +3042,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
+@@ -2954,17 +2955,6 @@ iavf_dev_reset(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (67 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/iavf: delay VF reset command' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/ice: " Xueming Li
` (51 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ff90a3bb8523c29d5e02b6ff2c8e79345ba177be
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ff90a3bb8523c29d5e02b6ff2c8e79345ba177be Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:24 +0100
Subject: [PATCH] net/i40e: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2d040df2437a025ef6d2ecf72de96d5c9fe97439 ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: 5171b4ee6b6b ("net/i40e: optimize Tx by using AVX512")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/i40e/i40e_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/i40e/i40e_rxtx_vec_avx512.c b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
index f3050cd06c..62fce19dc4 100644
--- a/drivers/net/i40e/i40e_rxtx_vec_avx512.c
+++ b/drivers/net/i40e/i40e_rxtx_vec_avx512.c
@@ -799,6 +799,7 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_load_si512(&txep[copied]);
const __m512i b = _mm512_load_si512(&txep[copied + 8]);
const __m512i c = _mm512_load_si512(&txep[copied + 16]);
@@ -808,6 +809,12 @@ i40e_tx_free_bufs_avx512(struct i40e_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_load_si512(&txep[copied]);
+ const __m512i b = _mm512_load_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.441488072 +0800
+++ 0069-net-i40e-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 2d040df2437a025ef6d2ecf72de96d5c9fe97439 Mon Sep 17 00:00:00 2001
+From ff90a3bb8523c29d5e02b6ff2c8e79345ba177be Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2d040df2437a025ef6d2ecf72de96d5c9fe97439 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 0238b03f8a..3b2750221b 100644
+index f3050cd06c..62fce19dc4 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
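The fix above (and the identical ice, iavf and idpf fixes queued after it) comes down to register arithmetic: a 512-bit ZMM register holds eight 8-byte pointers on 64-bit but sixteen 4-byte pointers on 32-bit, so moving 32 mbuf pointers takes four load/store pairs on 64-bit and only two on 32-bit. A minimal standalone sketch of the same pattern, with illustrative names rather than the driver's own code:

#include <immintrin.h>

/* Copy 32 pointers using 64-byte ZMM loads/stores.
 * 64-bit: 32 * 8 bytes = 256 bytes = 4 ZMM load/store pairs.
 * 32-bit: 32 * 4 bytes = 128 bytes = 2 ZMM load/store pairs.
 * RTE_ARCH_64 is the DPDK build macro used by the patch; compile with
 * AVX-512 enabled (e.g. -mavx512f).
 */
static inline void
copy_32_pointers(void **dst, void * const *src)
{
#ifdef RTE_ARCH_64
	const __m512i a = _mm512_loadu_si512(&src[0]);
	const __m512i b = _mm512_loadu_si512(&src[8]);
	const __m512i c = _mm512_loadu_si512(&src[16]);
	const __m512i d = _mm512_loadu_si512(&src[24]);
	_mm512_storeu_si512(&dst[0], a);
	_mm512_storeu_si512(&dst[8], b);
	_mm512_storeu_si512(&dst[16], c);
	_mm512_storeu_si512(&dst[24], d);
#else
	/* 4-byte pointers: sixteen fit in one 512-bit register. */
	const __m512i a = _mm512_loadu_si512(&src[0]);
	const __m512i b = _mm512_loadu_si512(&src[16]);
	_mm512_storeu_si512(&dst[0], a);
	_mm512_storeu_si512(&dst[16], b);
#endif
}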
* patch 'net/ice: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (68 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: " Xueming Li
` (50 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d3a59470caf12bea8b3cf166d7965509b2e1de5a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d3a59470caf12bea8b3cf166d7965509b2e1de5a Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:25 +0100
Subject: [PATCH] net/ice: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: a4e480de268e ("net/ice: optimize Tx by using AVX512")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/ice/ice_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/ice/ice_rxtx_vec_avx512.c b/drivers/net/ice/ice_rxtx_vec_avx512.c
index 04148e8ea2..add095ef06 100644
--- a/drivers/net/ice/ice_rxtx_vec_avx512.c
+++ b/drivers/net/ice/ice_rxtx_vec_avx512.c
@@ -907,6 +907,7 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -916,6 +917,12 @@ ice_tx_free_bufs_avx512(struct ice_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.474181072 +0800
+++ 0070-net-ice-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 Mon Sep 17 00:00:00 2001
+From d3a59470caf12bea8b3cf166d7965509b2e1de5a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit da97aeafca4cdd40892ffb7e628bb15dcf9c0f25 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/iavf: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (69 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/ice: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
` (49 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bbcdd3ea1a4193c5e2198cabdd843899669acf61
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bbcdd3ea1a4193c5e2198cabdd843899669acf61 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:26 +0100
Subject: [PATCH] net/iavf: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 77608b24bdd840d323ebd9cb6ffffaf5c760983e ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: 9ab9514c150e ("net/iavf: enable AVX512 for Tx")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/net/iavf/iavf_rxtx_vec_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 7a7df6d258..0e94eada4a 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -1892,6 +1892,7 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -1901,6 +1902,12 @@ iavf_tx_free_bufs_avx512(struct iavf_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.507644371 +0800
+++ 0071-net-iavf-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 77608b24bdd840d323ebd9cb6ffffaf5c760983e Mon Sep 17 00:00:00 2001
+From bbcdd3ea1a4193c5e2198cabdd843899669acf61 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 77608b24bdd840d323ebd9cb6ffffaf5c760983e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3bb6f305df..d6a861bf80 100644
+index 7a7df6d258..0e94eada4a 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/idpf: fix AVX-512 pointer copy on 32-bit' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (70 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/iavf: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
` (48 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Bruce Richardson; +Cc: xuemingl, Ian Stokes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fbbdaf66bbe686f9f864e55f4dc90a95f0b49637
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fbbdaf66bbe686f9f864e55f4dc90a95f0b49637 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 6 Sep 2024 15:11:27 +0100
Subject: [PATCH] common/idpf: fix AVX-512 pointer copy on 32-bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d16364e3bdbfd9e07a487bf776a829c565337e3c ]
The size of a pointer on 32-bit is only 4 rather than 8 bytes, so
copying 32 pointers only requires half the number of AVX-512 load store
operations.
Fixes: 5bf87b45b2c8 ("net/idpf: add AVX512 data path for single queue model")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ian Stokes <ian.stokes@intel.com>
---
drivers/common/idpf/idpf_common_rxtx_avx512.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/drivers/common/idpf/idpf_common_rxtx_avx512.c b/drivers/common/idpf/idpf_common_rxtx_avx512.c
index f65e8d512b..5abafc729b 100644
--- a/drivers/common/idpf/idpf_common_rxtx_avx512.c
+++ b/drivers/common/idpf/idpf_common_rxtx_avx512.c
@@ -1043,6 +1043,7 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
uint32_t copied = 0;
/* n is multiple of 32 */
while (copied < n) {
+#ifdef RTE_ARCH_64
const __m512i a = _mm512_loadu_si512(&txep[copied]);
const __m512i b = _mm512_loadu_si512(&txep[copied + 8]);
const __m512i c = _mm512_loadu_si512(&txep[copied + 16]);
@@ -1052,6 +1053,12 @@ idpf_tx_singleq_free_bufs_avx512(struct idpf_tx_queue *txq)
_mm512_storeu_si512(&cache_objs[copied + 8], b);
_mm512_storeu_si512(&cache_objs[copied + 16], c);
_mm512_storeu_si512(&cache_objs[copied + 24], d);
+#else
+ const __m512i a = _mm512_loadu_si512(&txep[copied]);
+ const __m512i b = _mm512_loadu_si512(&txep[copied + 16]);
+ _mm512_storeu_si512(&cache_objs[copied], a);
+ _mm512_storeu_si512(&cache_objs[copied + 16], b);
+#endif
copied += 32;
}
cache->len += n;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.545663870 +0800
+++ 0072-common-idpf-fix-AVX-512-pointer-copy-on-32-bit.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From d16364e3bdbfd9e07a487bf776a829c565337e3c Mon Sep 17 00:00:00 2001
+From fbbdaf66bbe686f9f864e55f4dc90a95f0b49637 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d16364e3bdbfd9e07a487bf776a829c565337e3c ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 3b5e124ec8..b8450b03ae 100644
+index f65e8d512b..5abafc729b 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/gve: fix queue setup and stop' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (71 preceding siblings ...)
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
@ 2024-11-11 6:27 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
` (47 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:27 UTC (permalink / raw)
To: Tathagat Priyadarshi; +Cc: xuemingl, Joshua Washington, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=33a52ddb8c7ebab75384199c141ce6097dd23855
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 33a52ddb8c7ebab75384199c141ce6097dd23855 Mon Sep 17 00:00:00 2001
From: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Date: Wed, 31 Jul 2024 05:26:43 +0000
Subject: [PATCH] net/gve: fix queue setup and stop
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7174c8891dcfb2a148e03c5fe2f200742b2dadbe ]
Update the Tx/Rx queue setup/stop routines that are unique to DQO,
so that they may be called for instances that use the DQO RDA format
during dev start/stop.
Fixes: b044845bb015 ("net/gve: support queue start/stop")
Signed-off-by: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Acked-by: Joshua Washington <joshwash@google.com>
---
drivers/net/gve/gve_ethdev.c | 29 +++++++++++++++++++++++------
1 file changed, 23 insertions(+), 6 deletions(-)
diff --git a/drivers/net/gve/gve_ethdev.c b/drivers/net/gve/gve_ethdev.c
index ecd37ff37f..bd683a64d7 100644
--- a/drivers/net/gve/gve_ethdev.c
+++ b/drivers/net/gve/gve_ethdev.c
@@ -140,11 +140,16 @@ gve_start_queues(struct rte_eth_dev *dev)
PMD_DRV_LOG(ERR, "Failed to create %u tx queues.", num_queues);
return ret;
}
- for (i = 0; i < num_queues; i++)
- if (gve_tx_queue_start(dev, i) != 0) {
+ for (i = 0; i < num_queues; i++) {
+ if (gve_is_gqi(priv))
+ ret = gve_tx_queue_start(dev, i);
+ else
+ ret = gve_tx_queue_start_dqo(dev, i);
+ if (ret != 0) {
PMD_DRV_LOG(ERR, "Fail to start Tx queue %d", i);
goto err_tx;
}
+ }
num_queues = dev->data->nb_rx_queues;
priv->rxqs = (struct gve_rx_queue **)dev->data->rx_queues;
@@ -167,9 +172,15 @@ gve_start_queues(struct rte_eth_dev *dev)
return 0;
err_rx:
- gve_stop_rx_queues(dev);
+ if (gve_is_gqi(priv))
+ gve_stop_rx_queues(dev);
+ else
+ gve_stop_rx_queues_dqo(dev);
err_tx:
- gve_stop_tx_queues(dev);
+ if (gve_is_gqi(priv))
+ gve_stop_tx_queues(dev);
+ else
+ gve_stop_tx_queues_dqo(dev);
return ret;
}
@@ -193,10 +204,16 @@ gve_dev_start(struct rte_eth_dev *dev)
static int
gve_dev_stop(struct rte_eth_dev *dev)
{
+ struct gve_priv *priv = dev->data->dev_private;
dev->data->dev_link.link_status = RTE_ETH_LINK_DOWN;
- gve_stop_tx_queues(dev);
- gve_stop_rx_queues(dev);
+ if (gve_is_gqi(priv)) {
+ gve_stop_tx_queues(dev);
+ gve_stop_rx_queues(dev);
+ } else {
+ gve_stop_tx_queues_dqo(dev);
+ gve_stop_rx_queues_dqo(dev);
+ }
dev->data->dev_started = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.585394669 +0800
+++ 0073-net-gve-fix-queue-setup-and-stop.patch 2024-11-11 14:23:05.172192839 +0800
@@ -1 +1 @@
-From 7174c8891dcfb2a148e03c5fe2f200742b2dadbe Mon Sep 17 00:00:00 2001
+From 33a52ddb8c7ebab75384199c141ce6097dd23855 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7174c8891dcfb2a148e03c5fe2f200742b2dadbe ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index d202940873..db4ebe7036 100644
+index ecd37ff37f..bd683a64d7 100644
@@ -23 +25 @@
-@@ -288,11 +288,16 @@ gve_start_queues(struct rte_eth_dev *dev)
+@@ -140,11 +140,16 @@ gve_start_queues(struct rte_eth_dev *dev)
@@ -42 +44 @@
-@@ -315,9 +320,15 @@ gve_start_queues(struct rte_eth_dev *dev)
+@@ -167,9 +172,15 @@ gve_start_queues(struct rte_eth_dev *dev)
@@ -60 +62 @@
-@@ -362,10 +373,16 @@ gve_dev_start(struct rte_eth_dev *dev)
+@@ -193,10 +204,16 @@ gve_dev_start(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
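The change above is essentially per-format dispatch: every queue start/stop call site now picks the GQI or DQO variant based on the device's queue format instead of unconditionally calling the GQI one (the real driver keys this off gve_is_gqi()). A small self-contained sketch of that idea with hypothetical names, not the gve API:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical device handle: only carries the queue-format flag. */
struct toy_dev {
	bool is_gqi;
};

static int toy_txq_start_gqi(struct toy_dev *d, int q) { (void)d; printf("GQI start txq %d\n", q); return 0; }
static int toy_txq_start_dqo(struct toy_dev *d, int q) { (void)d; printf("DQO start txq %d\n", q); return 0; }

/* Dispatch on the queue format, as the fix does for start, stop, Rx and Tx. */
static int
toy_txq_start(struct toy_dev *d, int q)
{
	return d->is_gqi ? toy_txq_start_gqi(d, q) : toy_txq_start_dqo(d, q);
}

int main(void)
{
	struct toy_dev d = { .is_gqi = false };
	return toy_txq_start(&d, 0);
}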
* patch 'net/gve: fix Tx for chained mbuf' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (72 preceding siblings ...)
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
` (46 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Tathagat Priyadarshi
Cc: xuemingl, Varun Lakkur Ambaji Rao, Joshua Washington, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=32d8e5279beaacefe5e7cf91f4d265ac87667ce6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 32d8e5279beaacefe5e7cf91f4d265ac87667ce6 Mon Sep 17 00:00:00 2001
From: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Date: Fri, 2 Aug 2024 05:08:08 +0000
Subject: [PATCH] net/gve: fix Tx for chained mbuf
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 ]
The EOP and CSUM bits were not set for all the packets in the mbuf
chain, causing packet transmission stalls for packets split across
mbufs in a chain.
Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Signed-off-by: Tathagat Priyadarshi <tathagat.dpdk@gmail.com>
Signed-off-by: Varun Lakkur Ambaji Rao <varun.la@gmail.com>
Acked-by: Joshua Washington <joshwash@google.com>
---
.mailmap | 1 +
drivers/net/gve/gve_ethdev.h | 2 ++
drivers/net/gve/gve_tx_dqo.c | 9 ++++++---
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/.mailmap b/.mailmap
index a72dce1a61..5f2593e00e 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1489,6 +1489,7 @@ Vadim Suraev <vadim.suraev@gmail.com>
Vakul Garg <vakul.garg@nxp.com>
Vamsi Attunuru <vattunuru@marvell.com>
Vanshika Shukla <vanshika.shukla@nxp.com>
+Varun Lakkur Ambaji Rao <varun.la@gmail.com>
Varun Sethi <v.sethi@nxp.com>
Vasily Philipov <vasilyf@mellanox.com>
Veerasenareddy Burru <vburru@marvell.com>
diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
index 58d8943e71..133860488c 100644
--- a/drivers/net/gve/gve_ethdev.h
+++ b/drivers/net/gve/gve_ethdev.h
@@ -33,6 +33,8 @@
RTE_MBUF_F_TX_L4_MASK | \
RTE_MBUF_F_TX_TCP_SEG)
+#define GVE_TX_CKSUM_OFFLOAD_MASK_DQO (GVE_TX_CKSUM_OFFLOAD_MASK | RTE_MBUF_F_TX_IP_CKSUM)
+
/* A list of pages registered with the device during setup and used by a queue
* as buffers
*/
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index 97d9c6549b..b9d6d01749 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -89,6 +89,7 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
uint16_t sw_id;
uint64_t bytes;
uint16_t first_sw_id;
+ uint8_t csum;
sw_ring = txq->sw_ring;
txr = txq->tx_ring;
@@ -114,6 +115,9 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
ol_flags = tx_pkt->ol_flags;
nb_used = tx_pkt->nb_segs;
first_sw_id = sw_id;
+
+ csum = !!(ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO);
+
do {
if (sw_ring[sw_id] != NULL)
PMD_DRV_LOG(DEBUG, "Overwriting an entry in sw_ring");
@@ -126,6 +130,8 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
txd->pkt.dtype = GVE_TX_PKT_DESC_DTYPE_DQO;
txd->pkt.compl_tag = rte_cpu_to_le_16(first_sw_id);
txd->pkt.buf_size = RTE_MIN(tx_pkt->data_len, GVE_TX_MAX_BUF_SIZE_DQO);
+ txd->pkt.end_of_packet = 0;
+ txd->pkt.checksum_offload_enable = csum;
/* size of desc_ring and sw_ring could be different */
tx_id = (tx_id + 1) & mask;
@@ -138,9 +144,6 @@ gve_tx_burst_dqo(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
/* fill the last descriptor with End of Packet (EOP) bit */
txd->pkt.end_of_packet = 1;
- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
- txd->pkt.checksum_offload_enable = 1;
-
txq->nb_free -= nb_used;
txq->nb_used += nb_used;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.630247869 +0800
+++ 0074-net-gve-fix-Tx-for-chained-mbuf.patch 2024-11-11 14:23:05.182192838 +0800
@@ -1 +1 @@
-From 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 Mon Sep 17 00:00:00 2001
+From 32d8e5279beaacefe5e7cf91f4d265ac87667ce6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 21b1d725e5a6cd38fe28d83c1f6cf00d80643b31 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -17,0 +20 @@
+ drivers/net/gve/gve_ethdev.h | 2 ++
@@ -19 +22 @@
- 2 files changed, 7 insertions(+), 3 deletions(-)
+ 3 files changed, 9 insertions(+), 3 deletions(-)
@@ -22 +25 @@
-index 3abb37673b..995a6f0553 100644
+index a72dce1a61..5f2593e00e 100644
@@ -25 +28,2 @@
-@@ -1553,6 +1553,7 @@ Vakul Garg <vakul.garg@nxp.com>
+@@ -1489,6 +1489,7 @@ Vadim Suraev <vadim.suraev@gmail.com>
+ Vakul Garg <vakul.garg@nxp.com>
@@ -27 +30,0 @@
- Vamsi Krishna Atluri <vamsi.atluri@amd.com>
@@ -32,0 +36,13 @@
+diff --git a/drivers/net/gve/gve_ethdev.h b/drivers/net/gve/gve_ethdev.h
+index 58d8943e71..133860488c 100644
+--- a/drivers/net/gve/gve_ethdev.h
++++ b/drivers/net/gve/gve_ethdev.h
+@@ -33,6 +33,8 @@
+ RTE_MBUF_F_TX_L4_MASK | \
+ RTE_MBUF_F_TX_TCP_SEG)
+
++#define GVE_TX_CKSUM_OFFLOAD_MASK_DQO (GVE_TX_CKSUM_OFFLOAD_MASK | RTE_MBUF_F_TX_IP_CKSUM)
++
+ /* A list of pages registered with the device during setup and used by a queue
+ * as buffers
+ */
@@ -34 +50 @@
-index 1b85557a15..b9d6d01749 100644
+index 97d9c6549b..b9d6d01749 100644
@@ -68 +84 @@
-- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK_DQO)
+- if (ol_flags & GVE_TX_CKSUM_OFFLOAD_MASK)
^ permalink raw reply [flat|nested] 230+ messages in thread
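The pattern behind the fix above: when a packet spans several mbufs, every descriptor in the chain gets the checksum-offload flag, while only the final descriptor carries the end-of-packet bit. A minimal sketch with invented descriptor and segment types (not the gve structures):

#include <stdbool.h>
#include <stddef.h>

/* Invented types standing in for a Tx descriptor and an mbuf segment. */
struct toy_desc {
	bool end_of_packet;
	bool checksum_offload_enable;
};

struct toy_seg {
	struct toy_seg *next;   /* next segment of the chained packet */
};

/* Fill one descriptor per segment: CSUM on all, EOP only on the last. */
static size_t
fill_descs(struct toy_desc *ring, size_t ring_size, size_t tail,
	   const struct toy_seg *seg, bool csum)
{
	struct toy_desc *d = NULL;

	while (seg != NULL) {
		d = &ring[tail];
		d->checksum_offload_enable = csum;
		d->end_of_packet = false;
		tail = (tail + 1) % ring_size;
		seg = seg->next;
	}
	if (d != NULL)
		d->end_of_packet = true;   /* only the last descriptor */
	return tail;
}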
* patch 'net/tap: avoid memcpy with null argument' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (73 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
` (45 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4014df8b926a9e46faeff7514cfb76f26fb74eb1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4014df8b926a9e46faeff7514cfb76f26fb74eb1 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 13 Aug 2024 19:34:16 -0700
Subject: [PATCH] net/tap: avoid memcpy with null argument
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3975d85fb8606308ccdb6439b35f70e8733a78e8 ]
Calling memcpy with a null pointer is undefined even if the length is
zero, so check that the length is non-zero before copying.
Problem reported by the GCC analyzer.
Fixes: 7c25284e30c2 ("net/tap: add netlink back-end for flow API")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/tap/tap_netlink.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/net/tap/tap_netlink.c b/drivers/net/tap/tap_netlink.c
index 75af3404b0..c1f7ff56da 100644
--- a/drivers/net/tap/tap_netlink.c
+++ b/drivers/net/tap/tap_netlink.c
@@ -301,7 +301,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
rta = (struct rtattr *)NLMSG_TAIL(nh);
rta->rta_len = RTA_LENGTH(data_len);
rta->rta_type = type;
- memcpy(RTA_DATA(rta), data, data_len);
+ if (data_len > 0)
+ memcpy(RTA_DATA(rta), data, data_len);
nh->nlmsg_len = NLMSG_ALIGN(nh->nlmsg_len) + RTA_ALIGN(rta->rta_len);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.664473368 +0800
+++ 0075-net-tap-avoid-memcpy-with-null-argument.patch 2024-11-11 14:23:05.182192838 +0800
@@ -1 +1 @@
-From 3975d85fb8606308ccdb6439b35f70e8733a78e8 Mon Sep 17 00:00:00 2001
+From 4014df8b926a9e46faeff7514cfb76f26fb74eb1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3975d85fb8606308ccdb6439b35f70e8733a78e8 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index d9c260127d..35c491ac37 100644
+index 75af3404b0..c1f7ff56da 100644
@@ -23 +25 @@
-@@ -302,7 +302,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
+@@ -301,7 +301,8 @@ tap_nlattr_add(struct nlmsghdr *nh, unsigned short type,
^ permalink raw reply [flat|nested] 230+ messages in thread
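The rule the fix above relies on: memcpy's arguments must be valid pointers even when the length is zero, so a NULL source with length 0 is still undefined behaviour and static analyzers flag it. A tiny standalone illustration of the guarded-copy pattern (generic code, not the tap netlink helper):

#include <string.h>

/* Append optional attribute data into a buffer; data may be NULL when len is 0. */
static void
append_data(unsigned char *dst, const void *data, size_t len)
{
	/* Guard: calling memcpy(dst, NULL, 0) is undefined behaviour. */
	if (len > 0)
		memcpy(dst, data, len);
}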
* patch 'app/testpmd: remove unnecessary cast' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (74 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
` (44 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a32bb4a902153541e12f7aa349260344705e6812
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a32bb4a902153541e12f7aa349260344705e6812 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 23 Aug 2024 09:26:01 -0700
Subject: [PATCH] app/testpmd: remove unnecessary cast
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0a3901aa624a690faa49ca081c468320d4edcb7a ]
The list of builtin cmdline commands has unnecessary casts which
block compiler type checking.
Fixes: af75078fece3 ("first public release")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
app/test-pmd/cmdline.c | 456 ++++++++++++++++++++---------------------
1 file changed, 228 insertions(+), 228 deletions(-)
diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index d9304e4a32..bf6794ee1d 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -13047,240 +13047,240 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
/* list of instructions */
static cmdline_parse_ctx_t builtin_ctx[] = {
- (cmdline_parse_inst_t *)&cmd_help_brief,
- (cmdline_parse_inst_t *)&cmd_help_long,
- (cmdline_parse_inst_t *)&cmd_quit,
- (cmdline_parse_inst_t *)&cmd_load_from_file,
- (cmdline_parse_inst_t *)&cmd_showport,
- (cmdline_parse_inst_t *)&cmd_showqueue,
- (cmdline_parse_inst_t *)&cmd_showeeprom,
- (cmdline_parse_inst_t *)&cmd_showportall,
- (cmdline_parse_inst_t *)&cmd_representor_info,
- (cmdline_parse_inst_t *)&cmd_showdevice,
- (cmdline_parse_inst_t *)&cmd_showcfg,
- (cmdline_parse_inst_t *)&cmd_showfwdall,
- (cmdline_parse_inst_t *)&cmd_start,
- (cmdline_parse_inst_t *)&cmd_start_tx_first,
- (cmdline_parse_inst_t *)&cmd_start_tx_first_n,
- (cmdline_parse_inst_t *)&cmd_set_link_up,
- (cmdline_parse_inst_t *)&cmd_set_link_down,
- (cmdline_parse_inst_t *)&cmd_reset,
- (cmdline_parse_inst_t *)&cmd_set_numbers,
- (cmdline_parse_inst_t *)&cmd_set_log,
- (cmdline_parse_inst_t *)&cmd_set_rxoffs,
- (cmdline_parse_inst_t *)&cmd_set_rxpkts,
- (cmdline_parse_inst_t *)&cmd_set_rxhdrs,
- (cmdline_parse_inst_t *)&cmd_set_txpkts,
- (cmdline_parse_inst_t *)&cmd_set_txsplit,
- (cmdline_parse_inst_t *)&cmd_set_txtimes,
- (cmdline_parse_inst_t *)&cmd_set_fwd_list,
- (cmdline_parse_inst_t *)&cmd_set_fwd_mask,
- (cmdline_parse_inst_t *)&cmd_set_fwd_mode,
- (cmdline_parse_inst_t *)&cmd_set_fwd_retry_mode,
- (cmdline_parse_inst_t *)&cmd_set_burst_tx_retry,
- (cmdline_parse_inst_t *)&cmd_set_promisc_mode_one,
- (cmdline_parse_inst_t *)&cmd_set_promisc_mode_all,
- (cmdline_parse_inst_t *)&cmd_set_allmulti_mode_one,
- (cmdline_parse_inst_t *)&cmd_set_allmulti_mode_all,
- (cmdline_parse_inst_t *)&cmd_set_flush_rx,
- (cmdline_parse_inst_t *)&cmd_set_link_check,
- (cmdline_parse_inst_t *)&cmd_vlan_offload,
- (cmdline_parse_inst_t *)&cmd_vlan_tpid,
- (cmdline_parse_inst_t *)&cmd_rx_vlan_filter_all,
- (cmdline_parse_inst_t *)&cmd_rx_vlan_filter,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set_qinq,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_reset,
- (cmdline_parse_inst_t *)&cmd_tx_vlan_set_pvid,
- (cmdline_parse_inst_t *)&cmd_csum_set,
- (cmdline_parse_inst_t *)&cmd_csum_show,
- (cmdline_parse_inst_t *)&cmd_csum_tunnel,
- (cmdline_parse_inst_t *)&cmd_csum_mac_swap,
- (cmdline_parse_inst_t *)&cmd_tso_set,
- (cmdline_parse_inst_t *)&cmd_tso_show,
- (cmdline_parse_inst_t *)&cmd_tunnel_tso_set,
- (cmdline_parse_inst_t *)&cmd_tunnel_tso_show,
+ &cmd_help_brief,
+ &cmd_help_long,
+ &cmd_quit,
+ &cmd_load_from_file,
+ &cmd_showport,
+ &cmd_showqueue,
+ &cmd_showeeprom,
+ &cmd_showportall,
+ &cmd_representor_info,
+ &cmd_showdevice,
+ &cmd_showcfg,
+ &cmd_showfwdall,
+ &cmd_start,
+ &cmd_start_tx_first,
+ &cmd_start_tx_first_n,
+ &cmd_set_link_up,
+ &cmd_set_link_down,
+ &cmd_reset,
+ &cmd_set_numbers,
+ &cmd_set_log,
+ &cmd_set_rxoffs,
+ &cmd_set_rxpkts,
+ &cmd_set_rxhdrs,
+ &cmd_set_txpkts,
+ &cmd_set_txsplit,
+ &cmd_set_txtimes,
+ &cmd_set_fwd_list,
+ &cmd_set_fwd_mask,
+ &cmd_set_fwd_mode,
+ &cmd_set_fwd_retry_mode,
+ &cmd_set_burst_tx_retry,
+ &cmd_set_promisc_mode_one,
+ &cmd_set_promisc_mode_all,
+ &cmd_set_allmulti_mode_one,
+ &cmd_set_allmulti_mode_all,
+ &cmd_set_flush_rx,
+ &cmd_set_link_check,
+ &cmd_vlan_offload,
+ &cmd_vlan_tpid,
+ &cmd_rx_vlan_filter_all,
+ &cmd_rx_vlan_filter,
+ &cmd_tx_vlan_set,
+ &cmd_tx_vlan_set_qinq,
+ &cmd_tx_vlan_reset,
+ &cmd_tx_vlan_set_pvid,
+ &cmd_csum_set,
+ &cmd_csum_show,
+ &cmd_csum_tunnel,
+ &cmd_csum_mac_swap,
+ &cmd_tso_set,
+ &cmd_tso_show,
+ &cmd_tunnel_tso_set,
+ &cmd_tunnel_tso_show,
#ifdef RTE_LIB_GRO
- (cmdline_parse_inst_t *)&cmd_gro_enable,
- (cmdline_parse_inst_t *)&cmd_gro_flush,
- (cmdline_parse_inst_t *)&cmd_gro_show,
+ &cmd_gro_enable,
+ &cmd_gro_flush,
+ &cmd_gro_show,
#endif
#ifdef RTE_LIB_GSO
- (cmdline_parse_inst_t *)&cmd_gso_enable,
- (cmdline_parse_inst_t *)&cmd_gso_size,
- (cmdline_parse_inst_t *)&cmd_gso_show,
+ &cmd_gso_enable,
+ &cmd_gso_size,
+ &cmd_gso_show,
#endif
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_rx,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_tx,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_hw,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_lw,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_pt,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_xon,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_macfwd,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_set_autoneg,
- (cmdline_parse_inst_t *)&cmd_link_flow_control_show,
- (cmdline_parse_inst_t *)&cmd_priority_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_queue_priority_flow_control_set,
- (cmdline_parse_inst_t *)&cmd_config_dcb,
- (cmdline_parse_inst_t *)&cmd_read_rxd_txd,
- (cmdline_parse_inst_t *)&cmd_stop,
- (cmdline_parse_inst_t *)&cmd_mac_addr,
- (cmdline_parse_inst_t *)&cmd_set_fwd_eth_peer,
- (cmdline_parse_inst_t *)&cmd_set_qmap,
- (cmdline_parse_inst_t *)&cmd_set_xstats_hide_zero,
- (cmdline_parse_inst_t *)&cmd_set_record_core_cycles,
- (cmdline_parse_inst_t *)&cmd_set_record_burst_stats,
- (cmdline_parse_inst_t *)&cmd_operate_port,
- (cmdline_parse_inst_t *)&cmd_operate_specific_port,
- (cmdline_parse_inst_t *)&cmd_operate_attach_port,
- (cmdline_parse_inst_t *)&cmd_operate_detach_port,
- (cmdline_parse_inst_t *)&cmd_operate_detach_device,
- (cmdline_parse_inst_t *)&cmd_set_port_setup_on,
- (cmdline_parse_inst_t *)&cmd_config_speed_all,
- (cmdline_parse_inst_t *)&cmd_config_speed_specific,
- (cmdline_parse_inst_t *)&cmd_config_loopback_all,
- (cmdline_parse_inst_t *)&cmd_config_loopback_specific,
- (cmdline_parse_inst_t *)&cmd_config_rx_tx,
- (cmdline_parse_inst_t *)&cmd_config_mtu,
- (cmdline_parse_inst_t *)&cmd_config_max_pkt_len,
- (cmdline_parse_inst_t *)&cmd_config_max_lro_pkt_size,
- (cmdline_parse_inst_t *)&cmd_config_rx_mode_flag,
- (cmdline_parse_inst_t *)&cmd_config_rss,
- (cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
- (cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_config_deferred_start_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
- (cmdline_parse_inst_t *)&cmd_config_rss_reta,
- (cmdline_parse_inst_t *)&cmd_showport_reta,
- (cmdline_parse_inst_t *)&cmd_showport_macs,
- (cmdline_parse_inst_t *)&cmd_show_port_flow_transfer_proxy,
- (cmdline_parse_inst_t *)&cmd_config_burst,
- (cmdline_parse_inst_t *)&cmd_config_thresh,
- (cmdline_parse_inst_t *)&cmd_config_threshold,
- (cmdline_parse_inst_t *)&cmd_set_uc_hash_filter,
- (cmdline_parse_inst_t *)&cmd_set_uc_all_hash_filter,
- (cmdline_parse_inst_t *)&cmd_vf_mac_addr_filter,
- (cmdline_parse_inst_t *)&cmd_queue_rate_limit,
- (cmdline_parse_inst_t *)&cmd_tunnel_udp_config,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash_key,
- (cmdline_parse_inst_t *)&cmd_showport_rss_hash_algo,
- (cmdline_parse_inst_t *)&cmd_config_rss_hash_key,
- (cmdline_parse_inst_t *)&cmd_cleanup_txq_mbufs,
- (cmdline_parse_inst_t *)&cmd_dump,
- (cmdline_parse_inst_t *)&cmd_dump_one,
- (cmdline_parse_inst_t *)&cmd_flow,
- (cmdline_parse_inst_t *)&cmd_show_port_meter_cap,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_srtcm,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_trtcm,
- (cmdline_parse_inst_t *)&cmd_add_port_meter_profile_trtcm_rfc4115,
- (cmdline_parse_inst_t *)&cmd_del_port_meter_profile,
- (cmdline_parse_inst_t *)&cmd_create_port_meter,
- (cmdline_parse_inst_t *)&cmd_enable_port_meter,
- (cmdline_parse_inst_t *)&cmd_disable_port_meter,
- (cmdline_parse_inst_t *)&cmd_del_port_meter,
- (cmdline_parse_inst_t *)&cmd_del_port_meter_policy,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_profile,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_dscp_table,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_vlan_table,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_in_proto,
- (cmdline_parse_inst_t *)&cmd_get_port_meter_in_proto,
- (cmdline_parse_inst_t *)&cmd_get_port_meter_in_proto_prio,
- (cmdline_parse_inst_t *)&cmd_set_port_meter_stats_mask,
- (cmdline_parse_inst_t *)&cmd_show_port_meter_stats,
- (cmdline_parse_inst_t *)&cmd_mcast_addr,
- (cmdline_parse_inst_t *)&cmd_mcast_addr_flush,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_anti_spoof,
- (cmdline_parse_inst_t *)&cmd_set_vf_mac_anti_spoof,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_stripq,
- (cmdline_parse_inst_t *)&cmd_set_vf_vlan_insert,
- (cmdline_parse_inst_t *)&cmd_set_tx_loopback,
- (cmdline_parse_inst_t *)&cmd_set_all_queues_drop_en,
- (cmdline_parse_inst_t *)&cmd_set_vf_traffic,
- (cmdline_parse_inst_t *)&cmd_set_vf_rxmode,
- (cmdline_parse_inst_t *)&cmd_vf_rate_limit,
- (cmdline_parse_inst_t *)&cmd_vf_rxvlan_filter,
- (cmdline_parse_inst_t *)&cmd_set_vf_mac_addr,
- (cmdline_parse_inst_t *)&cmd_set_vxlan,
- (cmdline_parse_inst_t *)&cmd_set_vxlan_tos_ttl,
- (cmdline_parse_inst_t *)&cmd_set_vxlan_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_nvgre,
- (cmdline_parse_inst_t *)&cmd_set_nvgre_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_l2_encap,
- (cmdline_parse_inst_t *)&cmd_set_l2_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_l2_decap,
- (cmdline_parse_inst_t *)&cmd_set_l2_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_encap,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_decap,
- (cmdline_parse_inst_t *)&cmd_set_mplsogre_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_encap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap,
- (cmdline_parse_inst_t *)&cmd_set_mplsoudp_decap_with_vlan,
- (cmdline_parse_inst_t *)&cmd_set_conntrack_common,
- (cmdline_parse_inst_t *)&cmd_set_conntrack_dir,
- (cmdline_parse_inst_t *)&cmd_show_vf_stats,
- (cmdline_parse_inst_t *)&cmd_clear_vf_stats,
- (cmdline_parse_inst_t *)&cmd_show_port_supported_ptypes,
- (cmdline_parse_inst_t *)&cmd_set_port_ptypes,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_level_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_cap,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_type,
- (cmdline_parse_inst_t *)&cmd_show_port_tm_node_stats,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_shared_shaper,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_shared_shaper,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_node_wred_profile,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node_wred_profile,
- (cmdline_parse_inst_t *)&cmd_set_port_tm_node_shaper_profile,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_nonleaf_node_pmode,
- (cmdline_parse_inst_t *)&cmd_add_port_tm_leaf_node,
- (cmdline_parse_inst_t *)&cmd_del_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_set_port_tm_node_parent,
- (cmdline_parse_inst_t *)&cmd_suspend_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_resume_port_tm_node,
- (cmdline_parse_inst_t *)&cmd_port_tm_hierarchy_commit,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_ip_ecn,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_ip_dscp,
- (cmdline_parse_inst_t *)&cmd_port_tm_mark_vlan_dei,
- (cmdline_parse_inst_t *)&cmd_cfg_tunnel_udp_port,
- (cmdline_parse_inst_t *)&cmd_rx_offload_get_capa,
- (cmdline_parse_inst_t *)&cmd_rx_offload_get_configuration,
- (cmdline_parse_inst_t *)&cmd_config_per_port_rx_offload,
- (cmdline_parse_inst_t *)&cmd_config_all_port_rx_offload,
- (cmdline_parse_inst_t *)&cmd_config_per_queue_rx_offload,
- (cmdline_parse_inst_t *)&cmd_tx_offload_get_capa,
- (cmdline_parse_inst_t *)&cmd_tx_offload_get_configuration,
- (cmdline_parse_inst_t *)&cmd_config_per_port_tx_offload,
- (cmdline_parse_inst_t *)&cmd_config_all_port_tx_offload,
- (cmdline_parse_inst_t *)&cmd_config_per_queue_tx_offload,
+ &cmd_link_flow_control_set,
+ &cmd_link_flow_control_set_rx,
+ &cmd_link_flow_control_set_tx,
+ &cmd_link_flow_control_set_hw,
+ &cmd_link_flow_control_set_lw,
+ &cmd_link_flow_control_set_pt,
+ &cmd_link_flow_control_set_xon,
+ &cmd_link_flow_control_set_macfwd,
+ &cmd_link_flow_control_set_autoneg,
+ &cmd_link_flow_control_show,
+ &cmd_priority_flow_control_set,
+ &cmd_queue_priority_flow_control_set,
+ &cmd_config_dcb,
+ &cmd_read_rxd_txd,
+ &cmd_stop,
+ &cmd_mac_addr,
+ &cmd_set_fwd_eth_peer,
+ &cmd_set_qmap,
+ &cmd_set_xstats_hide_zero,
+ &cmd_set_record_core_cycles,
+ &cmd_set_record_burst_stats,
+ &cmd_operate_port,
+ &cmd_operate_specific_port,
+ &cmd_operate_attach_port,
+ &cmd_operate_detach_port,
+ &cmd_operate_detach_device,
+ &cmd_set_port_setup_on,
+ &cmd_config_speed_all,
+ &cmd_config_speed_specific,
+ &cmd_config_loopback_all,
+ &cmd_config_loopback_specific,
+ &cmd_config_rx_tx,
+ &cmd_config_mtu,
+ &cmd_config_max_pkt_len,
+ &cmd_config_max_lro_pkt_size,
+ &cmd_config_rx_mode_flag,
+ &cmd_config_rss,
+ &cmd_config_rxtx_ring_size,
+ &cmd_config_rxtx_queue,
+ &cmd_config_deferred_start_rxtx_queue,
+ &cmd_setup_rxtx_queue,
+ &cmd_config_rss_reta,
+ &cmd_showport_reta,
+ &cmd_showport_macs,
+ &cmd_show_port_flow_transfer_proxy,
+ &cmd_config_burst,
+ &cmd_config_thresh,
+ &cmd_config_threshold,
+ &cmd_set_uc_hash_filter,
+ &cmd_set_uc_all_hash_filter,
+ &cmd_vf_mac_addr_filter,
+ &cmd_queue_rate_limit,
+ &cmd_tunnel_udp_config,
+ &cmd_showport_rss_hash,
+ &cmd_showport_rss_hash_key,
+ &cmd_showport_rss_hash_algo,
+ &cmd_config_rss_hash_key,
+ &cmd_cleanup_txq_mbufs,
+ &cmd_dump,
+ &cmd_dump_one,
+ &cmd_flow,
+ &cmd_show_port_meter_cap,
+ &cmd_add_port_meter_profile_srtcm,
+ &cmd_add_port_meter_profile_trtcm,
+ &cmd_add_port_meter_profile_trtcm_rfc4115,
+ &cmd_del_port_meter_profile,
+ &cmd_create_port_meter,
+ &cmd_enable_port_meter,
+ &cmd_disable_port_meter,
+ &cmd_del_port_meter,
+ &cmd_del_port_meter_policy,
+ &cmd_set_port_meter_profile,
+ &cmd_set_port_meter_dscp_table,
+ &cmd_set_port_meter_vlan_table,
+ &cmd_set_port_meter_in_proto,
+ &cmd_get_port_meter_in_proto,
+ &cmd_get_port_meter_in_proto_prio,
+ &cmd_set_port_meter_stats_mask,
+ &cmd_show_port_meter_stats,
+ &cmd_mcast_addr,
+ &cmd_mcast_addr_flush,
+ &cmd_set_vf_vlan_anti_spoof,
+ &cmd_set_vf_mac_anti_spoof,
+ &cmd_set_vf_vlan_stripq,
+ &cmd_set_vf_vlan_insert,
+ &cmd_set_tx_loopback,
+ &cmd_set_all_queues_drop_en,
+ &cmd_set_vf_traffic,
+ &cmd_set_vf_rxmode,
+ &cmd_vf_rate_limit,
+ &cmd_vf_rxvlan_filter,
+ &cmd_set_vf_mac_addr,
+ &cmd_set_vxlan,
+ &cmd_set_vxlan_tos_ttl,
+ &cmd_set_vxlan_with_vlan,
+ &cmd_set_nvgre,
+ &cmd_set_nvgre_with_vlan,
+ &cmd_set_l2_encap,
+ &cmd_set_l2_encap_with_vlan,
+ &cmd_set_l2_decap,
+ &cmd_set_l2_decap_with_vlan,
+ &cmd_set_mplsogre_encap,
+ &cmd_set_mplsogre_encap_with_vlan,
+ &cmd_set_mplsogre_decap,
+ &cmd_set_mplsogre_decap_with_vlan,
+ &cmd_set_mplsoudp_encap,
+ &cmd_set_mplsoudp_encap_with_vlan,
+ &cmd_set_mplsoudp_decap,
+ &cmd_set_mplsoudp_decap_with_vlan,
+ &cmd_set_conntrack_common,
+ &cmd_set_conntrack_dir,
+ &cmd_show_vf_stats,
+ &cmd_clear_vf_stats,
+ &cmd_show_port_supported_ptypes,
+ &cmd_set_port_ptypes,
+ &cmd_show_port_tm_cap,
+ &cmd_show_port_tm_level_cap,
+ &cmd_show_port_tm_node_cap,
+ &cmd_show_port_tm_node_type,
+ &cmd_show_port_tm_node_stats,
+ &cmd_add_port_tm_node_shaper_profile,
+ &cmd_del_port_tm_node_shaper_profile,
+ &cmd_add_port_tm_node_shared_shaper,
+ &cmd_del_port_tm_node_shared_shaper,
+ &cmd_add_port_tm_node_wred_profile,
+ &cmd_del_port_tm_node_wred_profile,
+ &cmd_set_port_tm_node_shaper_profile,
+ &cmd_add_port_tm_nonleaf_node,
+ &cmd_add_port_tm_nonleaf_node_pmode,
+ &cmd_add_port_tm_leaf_node,
+ &cmd_del_port_tm_node,
+ &cmd_set_port_tm_node_parent,
+ &cmd_suspend_port_tm_node,
+ &cmd_resume_port_tm_node,
+ &cmd_port_tm_hierarchy_commit,
+ &cmd_port_tm_mark_ip_ecn,
+ &cmd_port_tm_mark_ip_dscp,
+ &cmd_port_tm_mark_vlan_dei,
+ &cmd_cfg_tunnel_udp_port,
+ &cmd_rx_offload_get_capa,
+ &cmd_rx_offload_get_configuration,
+ &cmd_config_per_port_rx_offload,
+ &cmd_config_all_port_rx_offload,
+ &cmd_config_per_queue_rx_offload,
+ &cmd_tx_offload_get_capa,
+ &cmd_tx_offload_get_configuration,
+ &cmd_config_per_port_tx_offload,
+ &cmd_config_all_port_tx_offload,
+ &cmd_config_per_queue_tx_offload,
#ifdef RTE_LIB_BPF
- (cmdline_parse_inst_t *)&cmd_operate_bpf_ld_parse,
- (cmdline_parse_inst_t *)&cmd_operate_bpf_unld_parse,
+ &cmd_operate_bpf_ld_parse,
+ &cmd_operate_bpf_unld_parse,
#endif
- (cmdline_parse_inst_t *)&cmd_config_tx_metadata_specific,
- (cmdline_parse_inst_t *)&cmd_show_tx_metadata,
- (cmdline_parse_inst_t *)&cmd_show_rx_tx_desc_status,
- (cmdline_parse_inst_t *)&cmd_show_rx_queue_desc_used_count,
- (cmdline_parse_inst_t *)&cmd_set_raw,
- (cmdline_parse_inst_t *)&cmd_show_set_raw,
- (cmdline_parse_inst_t *)&cmd_show_set_raw_all,
- (cmdline_parse_inst_t *)&cmd_config_tx_dynf_specific,
- (cmdline_parse_inst_t *)&cmd_show_fec_mode,
- (cmdline_parse_inst_t *)&cmd_set_fec_mode,
- (cmdline_parse_inst_t *)&cmd_set_rxq_avail_thresh,
- (cmdline_parse_inst_t *)&cmd_show_capability,
- (cmdline_parse_inst_t *)&cmd_set_flex_is_pattern,
- (cmdline_parse_inst_t *)&cmd_set_flex_spec_pattern,
- (cmdline_parse_inst_t *)&cmd_show_port_cman_capa,
- (cmdline_parse_inst_t *)&cmd_show_port_cman_config,
- (cmdline_parse_inst_t *)&cmd_set_port_cman_config,
- (cmdline_parse_inst_t *)&cmd_config_tx_affinity_map,
+ &cmd_config_tx_metadata_specific,
+ &cmd_show_tx_metadata,
+ &cmd_show_rx_tx_desc_status,
+ &cmd_show_rx_queue_desc_used_count,
+ &cmd_set_raw,
+ &cmd_show_set_raw,
+ &cmd_show_set_raw_all,
+ &cmd_config_tx_dynf_specific,
+ &cmd_show_fec_mode,
+ &cmd_set_fec_mode,
+ &cmd_set_rxq_avail_thresh,
+ &cmd_show_capability,
+ &cmd_set_flex_is_pattern,
+ &cmd_set_flex_spec_pattern,
+ &cmd_show_port_cman_capa,
+ &cmd_show_port_cman_config,
+ &cmd_set_port_cman_config,
+ &cmd_config_tx_affinity_map,
NULL,
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.700532467 +0800
+++ 0076-app-testpmd-remove-unnecessary-cast.patch 2024-11-11 14:23:05.192192838 +0800
@@ -1 +1 @@
-From 0a3901aa624a690faa49ca081c468320d4edcb7a Mon Sep 17 00:00:00 2001
+From a32bb4a902153541e12f7aa349260344705e6812 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0a3901aa624a690faa49ca081c468320d4edcb7a ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -15,2 +17,2 @@
- app/test-pmd/cmdline.c | 458 ++++++++++++++++++++---------------------
- 1 file changed, 229 insertions(+), 229 deletions(-)
+ app/test-pmd/cmdline.c | 456 ++++++++++++++++++++---------------------
+ 1 file changed, 228 insertions(+), 228 deletions(-)
@@ -19 +21 @@
-index b7759e38a8..358319c20a 100644
+index d9304e4a32..bf6794ee1d 100644
@@ -22 +24 @@
-@@ -13146,241 +13146,241 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
+@@ -13047,240 +13047,240 @@ static cmdline_parse_inst_t cmd_config_tx_affinity_map = {
@@ -205 +206,0 @@
-- (cmdline_parse_inst_t *)&cmd_config_rss_hash_algo,
@@ -355 +355,0 @@
-+ &cmd_config_rss_hash_algo,
@@ -457 +457 @@
-- (cmdline_parse_inst_t *)&cmd_show_rx_tx_queue_desc_used_count,
+- (cmdline_parse_inst_t *)&cmd_show_rx_queue_desc_used_count,
@@ -475 +475 @@
-+ &cmd_show_rx_tx_queue_desc_used_count,
++ &cmd_show_rx_queue_desc_used_count,
^ permalink raw reply [flat|nested] 230+ messages in thread
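The point of the cleanup above: casting every initializer to cmdline_parse_inst_t * makes the compiler accept any pointer, so a wrong entry compiles silently; without the cast, an element of the wrong type is diagnosed. A small generic illustration with toy types, not the testpmd tables:

/* Toy stand-ins for cmdline_parse_inst_t and some unrelated object. */
struct toy_inst { const char *help; };
struct toy_other { int unrelated; };

static struct toy_inst cmd_ok = { "ok" };
static struct toy_other not_a_cmd = { 42 };

/* With casts, the bad entry compiles without complaint. */
static struct toy_inst *table_with_casts[] = {
	(struct toy_inst *)&cmd_ok,
	(struct toy_inst *)&not_a_cmd,   /* wrong type, silently accepted */
	0,
};

/* Without casts, the same mistake triggers an incompatible-pointer-type
 * diagnostic, which is what removing the casts buys.
 */
static struct toy_inst *table_without_casts[] = {
	&cmd_ok,
	/* &not_a_cmd,  <- rejected by the compiler */
	0,
};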
* patch 'net/pcap: set live interface as non-blocking' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (75 preceding siblings ...)
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
` (43 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ofer Dagan, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=84123ec794ad3a8c2796fe344030ce8fb9c32e67
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 84123ec794ad3a8c2796fe344030ce8fb9c32e67 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Sat, 24 Aug 2024 11:07:10 -0700
Subject: [PATCH] net/pcap: set live interface as non-blocking
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 60dd5a70035f447104d457aa338557fb58d5cb06 ]
The DPDK PMDs are supposed to be non-blocking and poll for packets.
Configure PCAP to do this on a live interface.
Bugzilla ID: 1526
Fixes: 4c173302c307 ("pcap: add new driver")
Reported-by: Ofer Dagan <ofer.d@claroty.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/pcap/pcap_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 9626c343dc..1fb98e3d2b 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -522,6 +522,12 @@ open_iface_live(const char *iface, pcap_t **pcap) {
return -1;
}
+ if (pcap_setnonblock(*pcap, 1, errbuf)) {
+ PMD_LOG(ERR, "Couldn't set non-blocking on %s: %s", iface, errbuf);
+ pcap_close(*pcap);
+ return -1;
+ }
+
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.754897266 +0800
+++ 0077-net-pcap-set-live-interface-as-non-blocking.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 60dd5a70035f447104d457aa338557fb58d5cb06 Mon Sep 17 00:00:00 2001
+From 84123ec794ad3a8c2796fe344030ce8fb9c32e67 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 60dd5a70035f447104d457aa338557fb58d5cb06 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
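For context on the change above: with a blocking pcap handle, a read can stall the polling thread until a packet or the timeout arrives, while in non-blocking mode pcap_dispatch() returns immediately with 0 when nothing is available, which matches the PMD's poll-and-return model. A minimal libpcap sketch of opening a live interface that way (the interface name and capture parameters are placeholders):

#include <pcap/pcap.h>
#include <stdio.h>

int main(void)
{
	char errbuf[PCAP_ERRBUF_SIZE];
	/* Placeholder interface name, snaplen, promiscuous flag and timeout. */
	pcap_t *p = pcap_open_live("eth0", 65535, 1, 0, errbuf);

	if (p == NULL) {
		fprintf(stderr, "open: %s\n", errbuf);
		return 1;
	}
	if (pcap_setnonblock(p, 1, errbuf) != 0) {
		fprintf(stderr, "setnonblock: %s\n", errbuf);
		pcap_close(p);
		return 1;
	}
	/* A poll-mode loop would now call pcap_dispatch()/pcap_next_ex()
	 * repeatedly; with nothing captured it returns right away instead
	 * of blocking the datapath thread.
	 */
	pcap_close(p);
	return 0;
}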
* patch 'net/mana: support rdma-core via pkg-config' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (76 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
` (42 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Shreesh Adiga; +Cc: xuemingl, Long Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9477687ed2ef8e7c2588397a9542fc5ccfc1df40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9477687ed2ef8e7c2588397a9542fc5ccfc1df40 Mon Sep 17 00:00:00 2001
From: Shreesh Adiga <16567adigashreesh@gmail.com>
Date: Fri, 20 Sep 2024 16:41:16 +0530
Subject: [PATCH] net/mana: support rdma-core via pkg-config
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8d7596cad7abb413c25f6782fe62fd0d388b8b94 ]
Currently building with custom rdma-core installed in /opt/rdma-core
after setting PKG_CONFIG_PATH=/opt/rdma-core/lib64/pkgconfig/ results
in the below meson logs:
Run-time dependency libmana found: YES 1.0.54.0
Header "infiniband/manadv.h" has symbol "manadv_set_context_attr" : NO
To fix this, a libs list is built up in meson.build and passed to the
cc.has_header_symbol() call via the dependencies argument. After this
change, the libmana header files are included and net/mana is
successfully enabled.
Fixes: 517ed6e2d590 ("net/mana: add basic driver with build environment")
Signed-off-by: Shreesh Adiga <16567adigashreesh@gmail.com>
Acked-by: Long Li <longli@microsoft.com>
---
drivers/net/mana/meson.build | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mana/meson.build b/drivers/net/mana/meson.build
index 2d72eca5a8..3ddc230ab4 100644
--- a/drivers/net/mana/meson.build
+++ b/drivers/net/mana/meson.build
@@ -19,12 +19,14 @@ sources += files(
)
libnames = ['ibverbs', 'mana']
+libs = []
foreach libname:libnames
lib = dependency('lib' + libname, required:false)
if not lib.found()
lib = cc.find_library(libname, required:false)
endif
if lib.found()
+ libs += lib
ext_deps += lib
else
build = false
@@ -43,7 +45,7 @@ required_symbols = [
]
foreach arg:required_symbols
- if not cc.has_header_symbol(arg[0], arg[1])
+ if not cc.has_header_symbol(arg[0], arg[1], dependencies: libs, args: cflags)
build = false
reason = 'missing symbol "' + arg[1] + '" in "' + arg[0] + '"'
subdir_done()
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.786021665 +0800
+++ 0078-net-mana-support-rdma-core-via-pkg-config.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 8d7596cad7abb413c25f6782fe62fd0d388b8b94 Mon Sep 17 00:00:00 2001
+From 9477687ed2ef8e7c2588397a9542fc5ccfc1df40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8d7596cad7abb413c25f6782fe62fd0d388b8b94 ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 330d30b2ff..4d163fc0f2 100644
+index 2d72eca5a8..3ddc230ab4 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ena: revert redefining memcpy' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (77 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
` (41 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger
Cc: xuemingl, Wathsala Vithanage, Morten Brørup, Tyler Retzlaff,
Shai Brandes, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ac6feb7694793f1d084c443a5788725e0bb9630e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ac6feb7694793f1d084c443a5788725e0bb9630e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Mon, 12 Aug 2024 08:34:17 -0700
Subject: [PATCH] net/ena: revert redefining memcpy
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 966764d003554b38e892cf18df9e9af44483036d ]
Redefining memcpy as rte_memcpy has no performance gain on
current compilers, and introduced bugs like this one where
rte_memcpy() will be detected as referencing past the destination.
Bugzilla ID: 1510
Fixes: 142778b3702a ("net/ena: switch memcpy to optimized version")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Wathsala Vithanage <wathsala.vithanage@arm.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Shai Brandes <shaibran@amazon.com>
---
drivers/net/ena/base/ena_plat_dpdk.h | 10 +---------
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 665ac2f0cc..ba4a525898 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -26,7 +26,6 @@
#include <rte_spinlock.h>
#include <sys/time.h>
-#include <rte_memcpy.h>
typedef uint64_t u64;
typedef uint32_t u32;
@@ -70,14 +69,7 @@ typedef uint64_t dma_addr_t;
#define ENA_UDELAY(x) rte_delay_us_block(x)
#define ENA_TOUCH(x) ((void)(x))
-/* Redefine memcpy with caution: rte_memcpy can be simply aliased to memcpy, so
- * make the redefinition only if it's safe (and beneficial) to do so.
- */
-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || \
- defined(RTE_ARCH_ARM_NEON_MEMCPY)
-#undef memcpy
-#define memcpy rte_memcpy
-#endif
+
#define wmb rte_wmb
#define rmb rte_rmb
#define mb rte_mb
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.821629465 +0800
+++ 0079-net-ena-revert-redefining-memcpy.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From 966764d003554b38e892cf18df9e9af44483036d Mon Sep 17 00:00:00 2001
+From ac6feb7694793f1d084c443a5788725e0bb9630e Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 966764d003554b38e892cf18df9e9af44483036d ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -23,2 +25,2 @@
- drivers/net/ena/base/ena_plat_dpdk.h | 9 +--------
- 1 file changed, 1 insertion(+), 8 deletions(-)
+ drivers/net/ena/base/ena_plat_dpdk.h | 10 +---------
+ 1 file changed, 1 insertion(+), 9 deletions(-)
@@ -27 +29 @@
-index a41a4e4506..1121460470 100644
+index 665ac2f0cc..ba4a525898 100644
@@ -38 +40 @@
-@@ -68,13 +67,7 @@ typedef uint64_t dma_addr_t;
+@@ -70,14 +69,7 @@ typedef uint64_t dma_addr_t;
@@ -45 +47,2 @@
--#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || defined(RTE_ARCH_ARM_NEON_MEMCPY)
+-#if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64_MEMCPY) || \
+- defined(RTE_ARCH_ARM_NEON_MEMCPY)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: remove some basic address dump' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (78 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
` (40 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Jie Hai; +Cc: xuemingl, Huisong Li, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 26 Sep 2024 20:42:44 +0800
Subject: [PATCH] net/hns3: remove some basic address dump
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c8b7bec0ef23f53303c9cf03cfea44f1eb208738 ]
For security reasons, some address registers are not suitable
to be exposed; remove them from the register dump.
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_regs.c | 12 ++----------
1 file changed, 2 insertions(+), 10 deletions(-)
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 955bc7e3af..8793f61153 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -17,13 +17,9 @@
static int hns3_get_dfx_reg_line(struct hns3_hw *hw, uint32_t *lines);
-static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_ADDR_L_REG,
- HNS3_CMDQ_TX_ADDR_H_REG,
- HNS3_CMDQ_TX_DEPTH_REG,
+static const uint32_t cmdq_reg_addrs[] = {HNS3_CMDQ_TX_DEPTH_REG,
HNS3_CMDQ_TX_TAIL_REG,
HNS3_CMDQ_TX_HEAD_REG,
- HNS3_CMDQ_RX_ADDR_L_REG,
- HNS3_CMDQ_RX_ADDR_H_REG,
HNS3_CMDQ_RX_DEPTH_REG,
HNS3_CMDQ_RX_TAIL_REG,
HNS3_CMDQ_RX_HEAD_REG,
@@ -44,9 +40,7 @@ static const uint32_t common_vf_reg_addrs[] = {HNS3_MISC_VECTOR_REG_BASE,
HNS3_FUN_RST_ING,
HNS3_GRO_EN_REG};
-static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BASEADDR_L_REG,
- HNS3_RING_RX_BASEADDR_H_REG,
- HNS3_RING_RX_BD_NUM_REG,
+static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BD_NUM_REG,
HNS3_RING_RX_BD_LEN_REG,
HNS3_RING_RX_EN_REG,
HNS3_RING_RX_MERGE_EN_REG,
@@ -57,8 +51,6 @@ static const uint32_t ring_reg_addrs[] = {HNS3_RING_RX_BASEADDR_L_REG,
HNS3_RING_RX_FBD_OFFSET_REG,
HNS3_RING_RX_STASH_REG,
HNS3_RING_RX_BD_ERR_REG,
- HNS3_RING_TX_BASEADDR_L_REG,
- HNS3_RING_TX_BASEADDR_H_REG,
HNS3_RING_TX_BD_NUM_REG,
HNS3_RING_TX_EN_REG,
HNS3_RING_TX_PRIORITY_REG,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.855073064 +0800
+++ 0080-net-hns3-remove-some-basic-address-dump.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From c8b7bec0ef23f53303c9cf03cfea44f1eb208738 Mon Sep 17 00:00:00 2001
+From c8c9ae34ea6ded0a1b2b1ad43b98f567996352a8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c8b7bec0ef23f53303c9cf03cfea44f1eb208738 ]
@@ -8,2 +10,0 @@
-
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: fix dump counter of registers' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (79 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
` (39 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Jie Hai; +Cc: xuemingl, Huisong Li, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0feee3f5727cb2f849337b091252c6e67b100ba5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0feee3f5727cb2f849337b091252c6e67b100ba5 Mon Sep 17 00:00:00 2001
From: Jie Hai <haijie1@huawei.com>
Date: Thu, 26 Sep 2024 20:42:45 +0800
Subject: [PATCH] net/hns3: fix dump counter of registers
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e9b82b4d54c019973ffcb5f404ba920494f70513 ]
Since the driver dumps the queue interrupt registers according
to the intr_tqps_num, the counter should be the same.
Fixes: acb3260fac5c ("net/hns3: fix dump register out of range")
Fixes: 936eda25e8da ("net/hns3: support dump register")
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/net/hns3/hns3_regs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_regs.c b/drivers/net/hns3/hns3_regs.c
index 8793f61153..8c0c0a3027 100644
--- a/drivers/net/hns3/hns3_regs.c
+++ b/drivers/net/hns3/hns3_regs.c
@@ -127,7 +127,7 @@ hns3_get_regs_length(struct hns3_hw *hw, uint32_t *length)
tqp_intr_lines = sizeof(tqp_intr_reg_addrs) / REG_LEN_PER_LINE + 1;
len = (cmdq_lines + common_lines + ring_lines * hw->tqps_num +
- tqp_intr_lines * hw->num_msi) * REG_NUM_PER_LINE;
+ tqp_intr_lines * hw->intr_tqps_num) * REG_NUM_PER_LINE;
if (!hns->is_vf) {
ret = hns3_get_regs_num(hw, &regs_num_32_bit, &regs_num_64_bit);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.892376163 +0800
+++ 0081-net-hns3-fix-dump-counter-of-registers.patch 2024-11-11 14:23:05.202192838 +0800
@@ -1 +1 @@
-From e9b82b4d54c019973ffcb5f404ba920494f70513 Mon Sep 17 00:00:00 2001
+From 0feee3f5727cb2f849337b091252c6e67b100ba5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e9b82b4d54c019973ffcb5f404ba920494f70513 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'ethdev: fix overflow in descriptor count' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (80 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' " Xueming Li
` (38 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Niall Meade; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd Mon Sep 17 00:00:00 2001
From: Niall Meade <niall.meade@intel.com>
Date: Mon, 30 Sep 2024 13:40:02 +0000
Subject: [PATCH] ethdev: fix overflow in descriptor count
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 30efe60d3a37896567b660229ef6a04c5526f6db ]
Addressed a specific overflow issue in the eth_dev_adjust_nb_desc()
function where the uint16_t variable nb_desc would overflow when its
value was greater than (2^16 - nb_align). This overflow caused nb_desc
to incorrectly wrap around between 0 and nb_align-1, leading to the
function setting nb_desc to nb_min instead of the expected nb_max.
To give an example, let nb_desc=UINT16_MAX, nb_align=32, nb_max=4096 and
nb_min=64. RTE_ALIGN_CEIL(nb_desc, nb_align) calls
RTE_ALIGN_FLOOR(nb_desc + nb_align - 1, nb_align). This results in an
overflow of nb_desc, leading to nb_desc being set to 30 and then 0 when
the macros return. As a result of this, nb_desc is then set to nb_min
later on.
The resolution involves upcasting nb_desc to a uint32_t before the
RTE_ALIGN_CEIL macro is applied. This change ensures that the subsequent
call to RTE_ALIGN_FLOOR(nb_desc + (nb_align - 1), nb_align) does not
result in an overflow, as it would when nb_desc is a uint16_t. By using
a uint32_t for these operations, the correct behavior is maintained
without the risk of overflow.
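As a standalone illustration (plain C, outside DPDK, with simplified stand-ins for the RTE_ALIGN macros), the wrap-around and the effect of the upcast look like this:
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint16_t nb_align = 32, nb_max = 4096, nb_min = 64;
	uint16_t nb_desc = UINT16_MAX;

	/* 16-bit arithmetic, same shape as RTE_ALIGN_CEIL(*nb_desc, nb_align):
	 * 65535 + 31 truncated to 16 bits is 30, and rounding 30 down to a
	 * multiple of 32 gives 0, so the later RTE_MAX() picks nb_min (64). */
	uint16_t ceil16 = (uint16_t)((uint16_t)(nb_desc + nb_align - 1) &
				     (uint16_t)~(nb_align - 1));

	/* Upcast first, as the fix does: no truncation, the value rounds up
	 * to 65536 and the RTE_MIN() step then clips it to nb_max (4096). */
	uint32_t nb_desc_32 = ((uint32_t)nb_desc + nb_align - 1) &
			      ~(uint32_t)(nb_align - 1);
	nb_desc_32 = nb_desc_32 > nb_max ? nb_max : nb_desc_32;
	nb_desc_32 = nb_desc_32 < nb_min ? nb_min : nb_desc_32;

	printf("16-bit ceil: %u, upcast result: %u\n",
	       (unsigned)ceil16, (unsigned)nb_desc_32);
	return 0;
}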
Fixes: 0f67fc3baeb9 ("ethdev: add function to adjust number of descriptors")
Signed-off-by: Niall Meade <niall.meade@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
lib/ethdev/rte_ethdev.c | 12 +++++++++---
2 files changed, 10 insertions(+), 3 deletions(-)
diff --git a/.mailmap b/.mailmap
index 5f2593e00e..674384cfc5 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1029,6 +1029,7 @@ Nelson Escobar <neescoba@cisco.com>
Nemanja Marjanovic <nemanja.marjanovic@intel.com>
Netanel Belgazal <netanel@amazon.com>
Netanel Gonen <netanelg@mellanox.com>
+Niall Meade <niall.meade@intel.com>
Niall Power <niall.power@intel.com>
Nicholas Pratte <npratte@iol.unh.edu>
Nick Connolly <nick.connolly@arm.com> <nick.connolly@mayadata.io>
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index b9d99ece15..1f067873a9 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -6556,13 +6556,19 @@ static void
eth_dev_adjust_nb_desc(uint16_t *nb_desc,
const struct rte_eth_desc_lim *desc_lim)
{
+ /* Upcast to uint32 to avoid potential overflow with RTE_ALIGN_CEIL(). */
+ uint32_t nb_desc_32 = (uint32_t)*nb_desc;
+
if (desc_lim->nb_align != 0)
- *nb_desc = RTE_ALIGN_CEIL(*nb_desc, desc_lim->nb_align);
+ nb_desc_32 = RTE_ALIGN_CEIL(nb_desc_32, desc_lim->nb_align);
if (desc_lim->nb_max != 0)
- *nb_desc = RTE_MIN(*nb_desc, desc_lim->nb_max);
+ nb_desc_32 = RTE_MIN(nb_desc_32, desc_lim->nb_max);
+
+ nb_desc_32 = RTE_MAX(nb_desc_32, desc_lim->nb_min);
- *nb_desc = RTE_MAX(*nb_desc, desc_lim->nb_min);
+ /* Assign clipped u32 back to u16. */
+ *nb_desc = (uint16_t)nb_desc_32;
}
int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.921019263 +0800
+++ 0082-ethdev-fix-overflow-in-descriptor-count.patch 2024-11-11 14:23:05.212192838 +0800
@@ -1 +1 @@
-From 30efe60d3a37896567b660229ef6a04c5526f6db Mon Sep 17 00:00:00 2001
+From c795236ed2bc295e4ff8c8d8eafcf81f2b96c6dd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 30efe60d3a37896567b660229ef6a04c5526f6db ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
@@ -37 +39 @@
-index d49838320e..6e72362ebc 100644
+index 5f2593e00e..674384cfc5 100644
@@ -40 +42 @@
-@@ -1070,6 +1070,7 @@ Nelson Escobar <neescoba@cisco.com>
+@@ -1029,6 +1029,7 @@ Nelson Escobar <neescoba@cisco.com>
@@ -49 +51 @@
-index a1f7efa913..84ee7588fc 100644
+index b9d99ece15..1f067873a9 100644
@@ -52 +54 @@
-@@ -6667,13 +6667,19 @@ static void
+@@ -6556,13 +6556,19 @@ static void
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (81 preceding siblings ...)
2024-11-11 6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix typecasting channel ID' " Xueming Li
` (37 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Gagandeep Singh; +Cc: xuemingl, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=508ee4007a756e0eea944d532e3edb31135b31d9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 508ee4007a756e0eea944d532e3edb31135b31d9 Mon Sep 17 00:00:00 2001
From: Gagandeep Singh <g.singh@nxp.com>
Date: Tue, 1 Oct 2024 16:33:08 +0530
Subject: [PATCH] bus/dpaa: fix PFDRs leaks due to FQRNIs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b ]
When a Retire FQ command is executed on an FQ in the
Tentatively Scheduled or Parked state, the FQ is retired
immediately and an FQRNI (Frame Queue Retirement
Notification Immediate) message is generated. Software
must read this message from MR and consume it to free
the memory it uses.
Although the RM does not state which memory FQRNIs use,
experiments show that they can consume PFDRs. So if these
messages are allowed to build up indefinitely, PFDR resources
can become exhausted and cause enqueues to stall. Therefore
software must consume these MR messages on a regular basis
to avoid depleting the available PFDR resources.
This is the PFDR leak issue a user can experience while
using the DPDK crypto driver and creating and destroying
sessions multiple times. On a session destroy, DPDK calls
qman_retire_fq() for each FQ used by the session, but it does
not handle the generated FQRNIs and allows them to build up
indefinitely in MR.
This patch fixes the issue by consuming the FQRNIs received
from MR immediately after the FQ retire, by calling drain_mr_fqrni().
Please note that drain_mr_fqrni() only looks for
FQRNI type messages to consume. If other types of messages,
such as FQRN, FQRL, FQPN, ERN etc., also arrive on MR, those
messages need to be handled separately.
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 46 ++++++++++++++++--------------
1 file changed, 25 insertions(+), 21 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index 83db0a534e..f06992ca48 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -294,10 +294,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
qm_dqrr_set_maxfill(&p->p, 0);
}
+static inline void qm_mr_pvb_update(struct qm_portal *portal)
+{
+ register struct qm_mr *mr = &portal->mr;
+ const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
+
+#ifdef RTE_LIBRTE_DPAA_HWDEBUG
+ DPAA_ASSERT(mr->pmode == qm_mr_pvb);
+#endif
+ /* when accessing 'verb', use __raw_readb() to ensure that compiler
+ * inlining doesn't try to optimise out "excess reads".
+ */
+ if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
+ mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
+ if (!mr->pi)
+ mr->vbit ^= QM_MR_VERB_VBIT;
+ mr->fill++;
+ res = MR_INC(res);
+ }
+ dcbit_ro(res);
+}
+
static int drain_mr_fqrni(struct qm_portal *p)
{
const struct qm_mr_entry *msg;
loop:
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg) {
/*
@@ -319,6 +341,7 @@ loop:
do {
now = mfatb();
} while ((then + 10000) > now);
+ qm_mr_pvb_update(p);
msg = qm_mr_current(p);
if (!msg)
return 0;
@@ -481,27 +504,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
return 0;
}
-static inline void qm_mr_pvb_update(struct qm_portal *portal)
-{
- register struct qm_mr *mr = &portal->mr;
- const struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);
-
-#ifdef RTE_LIBRTE_DPAA_HWDEBUG
- DPAA_ASSERT(mr->pmode == qm_mr_pvb);
-#endif
- /* when accessing 'verb', use __raw_readb() to ensure that compiler
- * inlining doesn't try to optimise out "excess reads".
- */
- if ((__raw_readb(&res->ern.verb) & QM_MR_VERB_VBIT) == mr->vbit) {
- mr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);
- if (!mr->pi)
- mr->vbit ^= QM_MR_VERB_VBIT;
- mr->fill++;
- res = MR_INC(res);
- }
- dcbit_ro(res);
-}
-
struct qman_portal *
qman_init_portal(struct qman_portal *portal,
const struct qm_portal_config *c,
@@ -1825,6 +1827,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
}
out:
FQUNLOCK(fq);
+ /* Draining FQRNIs, if any */
+ drain_mr_fqrni(&p->p);
return rval;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.964983062 +0800
+++ 0083-bus-dpaa-fix-PFDRs-leaks-due-to-FQRNIs.patch 2024-11-11 14:23:05.212192838 +0800
@@ -1 +1 @@
-From b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b Mon Sep 17 00:00:00 2001
+From 508ee4007a756e0eea944d532e3edb31135b31d9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b292acc3c4a8fd5104cfdfa5c6d3d0df95b6543b ]
@@ -37 +39,0 @@
-Cc: stable@dpdk.org
@@ -46 +48 @@
-index 301057723e..9c90ee25a6 100644
+index 83db0a534e..f06992ca48 100644
@@ -49 +51 @@
-@@ -292,10 +292,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
+@@ -294,10 +294,32 @@ static inline void qman_stop_dequeues_ex(struct qman_portal *p)
@@ -82 +84 @@
-@@ -317,6 +339,7 @@ loop:
+@@ -319,6 +341,7 @@ loop:
@@ -90 +92 @@
-@@ -479,27 +502,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
+@@ -481,27 +504,6 @@ static inline int qm_mr_init(struct qm_portal *portal,
@@ -118 +120 @@
-@@ -1794,6 +1796,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
+@@ -1825,6 +1827,8 @@ int qman_retire_fq(struct qman_fq *fq, u32 *flags)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/dpaa: fix typecasting channel ID' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (82 preceding siblings ...)
2024-11-11 6:28 ` patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' " Xueming Li
` (36 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Rohit Raj; +Cc: xuemingl, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a1711950ed89b16332f4cc6e378de7269815ff6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a1711950ed89b16332f4cc6e378de7269815ff6 Mon Sep 17 00:00:00 2001
From: Rohit Raj <rohit.raj@nxp.com>
Date: Tue, 1 Oct 2024 16:33:09 +0530
Subject: [PATCH] net/dpaa: fix typecasting channel ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e ]
Avoid typecasting the u16 ch_id to a u32 pointer and passing it to
another API, since the wider write can corrupt adjacent data. Instead,
create a new u32 variable and typecast it back to u16 after it gets
updated by the API.
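A standalone illustration (plain C, not the driver; the struct layout and the alloc_pool_range() helper are made up for the example) of why writing 32 bits through a cast u16 pointer clobbers whatever sits next to the field, while the u32 temporary does not:
#include <stdint.h>
#include <stdio.h>

/* Hypothetical stand-in for qman_alloc_pool_range(): it writes a full
 * 32-bit result through the pointer it is given. */
static void alloc_pool_range(uint32_t *result)
{
	*result = 0x1234;
}

struct rxq {			/* made-up layout, just for the demo */
	uint16_t ch_id;
	uint16_t next_field;	/* neighbouring data that must stay intact */
};

int main(void)
{
	struct rxq q1 = { .ch_id = 0, .next_field = 0xBEEF };
	struct rxq q2 = { .ch_id = 0, .next_field = 0xBEEF };
	uint32_t ch_id;

	/* Buggy pattern: the 32-bit store covers ch_id *and* next_field
	 * (and is undefined behaviour through the type-punned pointer). */
	alloc_pool_range((uint32_t *)(void *)&q1.ch_id);

	/* Fixed pattern, as in the patch: use a real u32, then narrow it. */
	alloc_pool_range(&ch_id);
	q2.ch_id = (uint16_t)ch_id;

	printf("cast hack: ch_id=0x%04x next_field=0x%04x\n",
	       q1.ch_id, q1.next_field);
	printf("u32 temp : ch_id=0x%04x next_field=0x%04x\n",
	       q2.ch_id, q2.next_field);
	return 0;
}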
Fixes: 0c504f6950b6 ("net/dpaa: support push mode")
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_ethdev.c | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/drivers/net/dpaa/dpaa_ethdev.c b/drivers/net/dpaa/dpaa_ethdev.c
index bcb28f33ee..6fdbe80334 100644
--- a/drivers/net/dpaa/dpaa_ethdev.c
+++ b/drivers/net/dpaa/dpaa_ethdev.c
@@ -971,7 +971,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
struct fman_if *fif = dev->process_private;
struct qman_fq *rxq = &dpaa_intf->rx_queues[queue_idx];
struct qm_mcc_initfq opts = {0};
- u32 flags = 0;
+ u32 ch_id, flags = 0;
int ret;
u32 buffsz = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
uint32_t max_rx_pktlen;
@@ -1095,7 +1095,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
DPAA_IF_RX_CONTEXT_STASH;
/*Create a channel and associate given queue with the channel*/
- qman_alloc_pool_range((u32 *)&rxq->ch_id, 1, 1, 0);
+ qman_alloc_pool_range(&ch_id, 1, 1, 0);
+ rxq->ch_id = (u16)ch_id;
+
opts.we_mask = opts.we_mask | QM_INITFQ_WE_DESTWQ;
opts.fqd.dest.channel = rxq->ch_id;
opts.fqd.dest.wq = DPAA_IF_RX_PRIORITY;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:08.999944261 +0800
+++ 0084-net-dpaa-fix-typecasting-channel-ID.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e Mon Sep 17 00:00:00 2001
+From 9a1711950ed89b16332f4cc6e378de7269815ff6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5edc61ee9a2c1e1d9c8b75faac4b61de7111c34e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 51f5422e0c..afeca4307e 100644
+index bcb28f33ee..6fdbe80334 100644
@@ -23 +25 @@
-@@ -972,7 +972,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+@@ -971,7 +971,7 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
@@ -32 +34 @@
-@@ -1096,7 +1096,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
+@@ -1095,7 +1095,9 @@ int dpaa_eth_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (83 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/dpaa: fix typecasting channel ID' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix the fman details status' " Xueming Li
` (35 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85 Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Tue, 1 Oct 2024 16:33:10 +0530
Subject: [PATCH] bus/dpaa: fix VSP for 1G fm1-mac9 and 10
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 25434831ca958583fb79e1e8b06e83274c68fc93 ]
There is no need to classify the interface separately for 1G and 10G.
Note that VSP (Virtual Storage Profile) is the DPAA equivalent of an
SR-IOV configuration, used to logically divide a physical port into
virtual ports.
Fixes: e0718bb2ca95 ("bus/dpaa: add virtual storage profile port init")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman.c | 29 +++++++++++++++++++++++++++--
1 file changed, 27 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman.c b/drivers/bus/dpaa/base/fman/fman.c
index 1814372a40..8263d42bed 100644
--- a/drivers/bus/dpaa/base/fman/fman.c
+++ b/drivers/bus/dpaa/base/fman/fman.c
@@ -153,7 +153,7 @@ static void fman_if_vsp_init(struct __fman_if *__if)
size_t lenp;
const uint8_t mac_idx[] = {-1, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1};
- if (__if->__if.mac_type == fman_mac_1g) {
+ if (__if->__if.mac_idx <= 8) {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-1g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
@@ -176,7 +176,32 @@ static void fman_if_vsp_init(struct __fman_if *__if)
}
}
}
- } else if (__if->__if.mac_type == fman_mac_10g) {
+
+ for_each_compatible_node(dev, NULL,
+ "fsl,fman-port-op-extended-args") {
+ prop = of_get_property(dev, "cell-index", &lenp);
+
+ if (prop) {
+ cell_index = of_read_number(&prop[0],
+ lenp / sizeof(phandle));
+
+ if (cell_index == __if->__if.mac_idx) {
+ prop = of_get_property(dev,
+ "vsp-window",
+ &lenp);
+
+ if (prop) {
+ __if->__if.num_profiles =
+ of_read_number(&prop[0],
+ 1);
+ __if->__if.base_profile_id =
+ of_read_number(&prop[1],
+ 1);
+ }
+ }
+ }
+ }
+ } else {
for_each_compatible_node(dev, NULL,
"fsl,fman-port-10g-rx-extended-args") {
prop = of_get_property(dev, "cell-index", &lenp);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.033905660 +0800
+++ 0085-bus-dpaa-fix-VSP-for-1G-fm1-mac9-and-10.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From 25434831ca958583fb79e1e8b06e83274c68fc93 Mon Sep 17 00:00:00 2001
+From 747cfbc98b9ef13dcd7aeaecbf439f2a12cfbf85 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 25434831ca958583fb79e1e8b06e83274c68fc93 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 41195eb0a7..beeb03dbf2 100644
+index 1814372a40..8263d42bed 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'bus/dpaa: fix the fman details status' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (84 preceding siblings ...)
2024-11-11 6:28 ` patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix reallocate mbuf handling' " Xueming Li
` (34 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=adfe2ce7037c09e6a0b37d1d1381775e6b619f96
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From adfe2ce7037c09e6a0b37d1d1381775e6b619f96 Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Tue, 1 Oct 2024 16:33:11 +0530
Subject: [PATCH] bus/dpaa: fix the fman details status
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a87a1d0f4e7667fa3d6b818f30aa5c062e567597 ]
Fix the incorrect placement of brackets when calculating stats.
This corrects "(a | b) << 32" to "a | (b << 32)", so that only the
high 32-bit half is shifted before the two halves are combined.
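A standalone illustration of the precedence difference (plain C; reg_lo and reg_hi stand in for the two 32-bit halves read from the hardware):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint64_t reg_lo = 0x11223344;	/* low 32 bits of a 64-bit counter */
	uint64_t reg_hi = 0x00000001;	/* high 32 bits of the same counter */

	/* Buggy form: the two halves are OR'ed first and then shifted
	 * together, so the low word is lost and the high word is misplaced. */
	uint64_t bad = (reg_lo | reg_hi) << 32;

	/* Fixed form: only the high half is shifted before the OR. */
	uint64_t good = reg_lo | (reg_hi << 32);

	printf("bad  = 0x%016llx\n", (unsigned long long)bad);
	printf("good = 0x%016llx\n", (unsigned long long)good);
	return 0;
}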
Fixes: e62a3f4183f1 ("bus/dpaa: fix statistics reading")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/fman/fman_hw.c | 9 +++++----
1 file changed, 5 insertions(+), 4 deletions(-)
diff --git a/drivers/bus/dpaa/base/fman/fman_hw.c b/drivers/bus/dpaa/base/fman/fman_hw.c
index 24a99f7235..97e792806f 100644
--- a/drivers/bus/dpaa/base/fman/fman_hw.c
+++ b/drivers/bus/dpaa/base/fman/fman_hw.c
@@ -243,10 +243,11 @@ fman_if_stats_get_all(struct fman_if *p, uint64_t *value, int n)
int i;
uint64_t base_offset = offsetof(struct memac_regs, reoct_l);
- for (i = 0; i < n; i++)
- value[i] = (((u64)in_be32((char *)regs + base_offset + 8 * i) |
- (u64)in_be32((char *)regs + base_offset +
- 8 * i + 4)) << 32);
+ for (i = 0; i < n; i++) {
+ uint64_t a = in_be32((char *)regs + base_offset + 8 * i);
+ uint64_t b = in_be32((char *)regs + base_offset + 8 * i + 4);
+ value[i] = a | b << 32;
+ }
}
void
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.065665660 +0800
+++ 0086-bus-dpaa-fix-the-fman-details-status.patch 2024-11-11 14:23:05.222192838 +0800
@@ -1 +1 @@
-From a87a1d0f4e7667fa3d6b818f30aa5c062e567597 Mon Sep 17 00:00:00 2001
+From adfe2ce7037c09e6a0b37d1d1381775e6b619f96 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a87a1d0f4e7667fa3d6b818f30aa5c062e567597 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/dpaa: fix reallocate mbuf handling' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (85 preceding siblings ...)
2024-11-11 6:28 ` patch 'bus/dpaa: fix the fman details status' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' " Xueming Li
` (33 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Vanshika Shukla; +Cc: xuemingl, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b430ea372d74ca994443f2c8086311d1e3b1fcc8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b430ea372d74ca994443f2c8086311d1e3b1fcc8 Mon Sep 17 00:00:00 2001
From: Vanshika Shukla <vanshika.shukla@nxp.com>
Date: Tue, 1 Oct 2024 16:33:25 +0530
Subject: [PATCH] net/dpaa: fix reallocate mbuf handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7594cafa92189fd5bad87a5caa6b7a92bbab0979 ]
This patch fixes a bug in the reallocate_mbuf() handling code:
the source location is corrected when copying the data into
the new mbuf.
Fixes: f8c7a17a48c9 ("net/dpaa: support Tx scatter gather for non-DPAA buffer")
Signed-off-by: Vanshika Shukla <vanshika.shukla@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa/dpaa_rxtx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/dpaa/dpaa_rxtx.c b/drivers/net/dpaa/dpaa_rxtx.c
index ce4f3d6c85..018d55bbdc 100644
--- a/drivers/net/dpaa/dpaa_rxtx.c
+++ b/drivers/net/dpaa/dpaa_rxtx.c
@@ -1034,7 +1034,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
/* Copy the data */
data = rte_pktmbuf_append(new_mbufs[0], bytes_to_copy);
- rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(mbuf,
+ rte_memcpy((uint8_t *)data, rte_pktmbuf_mtod_offset(temp_mbuf,
void *, offset1), bytes_to_copy);
/* Set new offsets and the temp buffers */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.097428559 +0800
+++ 0087-net-dpaa-fix-reallocate-mbuf-handling.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 7594cafa92189fd5bad87a5caa6b7a92bbab0979 Mon Sep 17 00:00:00 2001
+From b430ea372d74ca994443f2c8086311d1e3b1fcc8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7594cafa92189fd5bad87a5caa6b7a92bbab0979 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 1d7efdef88..247e7b92ba 100644
+index ce4f3d6c85..018d55bbdc 100644
@@ -23 +25 @@
-@@ -1223,7 +1223,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
+@@ -1034,7 +1034,7 @@ reallocate_mbuf(struct qman_fq *txq, struct rte_mbuf *mbuf)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (86 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/dpaa: fix reallocate mbuf handling' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: always attempt Rx refill on DQ' " Xueming Li
` (32 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Rushil Gupta, Praveen Kaligineedi, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4c28c4f7677e0841e37ab4cf4e6db264e41ad8df
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4c28c4f7677e0841e37ab4cf4e6db264e41ad8df Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Tue, 1 Oct 2024 16:48:52 -0700
Subject: [PATCH] net/gve: fix mbuf allocation memory leak for DQ Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 265daac8a53aaaad89f562c201bc6c269d7817fc ]
Currently, gve_rxq_mbufs_alloc_dqo() allocates RING_SIZE buffers, but
only posts RING_SIZE - 1 of them, inevitably leaking a buffer every
time queues are stopped/started. This could eventually lead to running
out of mbufs if an application stops/starts traffic enough.
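The arithmetic of the leak can be seen with a standalone toy model (plain C, no DPDK; buf_alloc()/buf_free() are made-up stand-ins for mempool get/put):
#include <stdio.h>
#include <stdlib.h>

#define RING_SIZE 8

static int outstanding;		/* stand-in for mempool accounting */

static void *buf_alloc(void) { outstanding++; return malloc(64); }
static void buf_free(void *b) { outstanding--; free(b); }

/* Model of the buggy queue start/stop cycle: RING_SIZE buffers are
 * allocated, but only RING_SIZE - 1 of them are posted to the ring and
 * therefore only RING_SIZE - 1 are reclaimed when the queue stops. */
static void start_stop_cycle(void)
{
	void *sw_ring[RING_SIZE];
	int i;

	for (i = 0; i < RING_SIZE; i++)		/* allocate RING_SIZE */
		sw_ring[i] = buf_alloc();
	for (i = 0; i < RING_SIZE - 1; i++)	/* reclaim only the posted ones */
		buf_free(sw_ring[i]);
	/* sw_ring[RING_SIZE - 1] is lost on every cycle */
}

int main(void)
{
	int cycle;

	for (cycle = 0; cycle < 3; cycle++)
		start_stop_cycle();
	printf("buffers lost after 3 stop/start cycles: %d\n", outstanding);
	return 0;
}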
Fixes: b044845bb015 ("net/gve: support queue start/stop")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
.mailmap | 1 +
drivers/net/gve/gve_rx_dqo.c | 16 +++++++++-------
2 files changed, 10 insertions(+), 7 deletions(-)
diff --git a/.mailmap b/.mailmap
index 674384cfc5..c26a1acf7a 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1141,6 +1141,7 @@ Pradeep Satyanarayana <pradeep@us.ibm.com>
Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
Prashant Upadhyaya <prashant.upadhyaya@aricent.com> <praupadhyaya@gmail.com>
Prateek Agarwal <prateekag@cse.iitb.ac.in>
+Praveen Kaligineedi <pkaligineedi@google.com>
Praveen Shetty <praveen.shetty@intel.com>
Pravin Pathak <pravin.pathak@intel.com>
Prince Takkar <ptakkar@marvell.com>
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index a56cdbf11b..855c06dc11 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -335,34 +335,36 @@ static int
gve_rxq_mbufs_alloc_dqo(struct gve_rx_queue *rxq)
{
struct rte_mbuf *nmb;
+ uint16_t rx_mask;
uint16_t i;
int diag;
- diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc);
+ rx_mask = rxq->nb_rx_desc - 1;
+ diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0],
+ rx_mask);
if (diag < 0) {
rxq->stats.no_mbufs_bulk++;
- for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+ for (i = 0; i < rx_mask; i++) {
nmb = rte_pktmbuf_alloc(rxq->mpool);
if (!nmb)
break;
rxq->sw_ring[i] = nmb;
}
if (i < rxq->nb_rx_desc - 1) {
- rxq->stats.no_mbufs += rxq->nb_rx_desc - 1 - i;
+ rxq->stats.no_mbufs += rx_mask - i;
return -ENOMEM;
}
}
- for (i = 0; i < rxq->nb_rx_desc; i++) {
- if (i == rxq->nb_rx_desc - 1)
- break;
+ for (i = 0; i < rx_mask; i++) {
nmb = rxq->sw_ring[i];
rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i);
}
+ rxq->rx_ring[rx_mask].buf_id = rte_cpu_to_le_16(rx_mask);
rxq->nb_rx_hold = 0;
- rxq->bufq_tail = rxq->nb_rx_desc - 1;
+ rxq->bufq_tail = rx_mask;
rte_write32(rxq->bufq_tail, rxq->qrx_tail);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.130822358 +0800
+++ 0088-net-gve-fix-mbuf-allocation-memory-leak-for-DQ-Rx.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 265daac8a53aaaad89f562c201bc6c269d7817fc Mon Sep 17 00:00:00 2001
+From 4c28c4f7677e0841e37ab4cf4e6db264e41ad8df Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 265daac8a53aaaad89f562c201bc6c269d7817fc ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 6e72362ebc..7b3a20af68 100644
+index 674384cfc5..c26a1acf7a 100644
@@ -26 +28,2 @@
-@@ -1193,6 +1193,7 @@ Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
+@@ -1141,6 +1141,7 @@ Pradeep Satyanarayana <pradeep@us.ibm.com>
+ Prashant Bhole <bhole_prashant_q7@lab.ntt.co.jp>
@@ -29 +31,0 @@
- Prathisna Padmasanan <prathisna.padmasanan@intel.com>
@@ -35 +37 @@
-index d8e9eee4a8..81a68f0c7e 100644
+index a56cdbf11b..855c06dc11 100644
@@ -38 +40 @@
-@@ -395,34 +395,36 @@ static int
+@@ -335,34 +335,36 @@ static int
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/gve: always attempt Rx refill on DQ' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (87 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix type declaration of some variables' " Xueming Li
` (31 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Praveen Kaligineedi, Rushil Gupta, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=efbc64f353914ae5763f5b201e1d597d6c7b5044
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From efbc64f353914ae5763f5b201e1d597d6c7b5044 Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Tue, 1 Oct 2024 16:45:33 -0700
Subject: [PATCH] net/gve: always attempt Rx refill on DQ
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 ]
Before this patch, gve_rx_refill_dqo() is only called if the number of
packets received in a cycle is non-zero. However, in a
memory-constrained scenario this does not behave well: it can lock up
the queue if there is no memory and all posted buffers have been
received before memory is freed up for the driver to use.
This patch moves the gve_rx_refill_dqo() call so that it occurs
regardless of whether packets have been received, allowing the driver
to recover once enough memory is freed.
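A toy model of why the unconditional refill matters (plain C, not the driver; posted and free_mbufs roughly play the roles of the buffer queue and the mempool):
#include <stdio.h>
#include <stdbool.h>

static int posted;	/* buffers currently posted to the Rx ring */
static int free_mbufs;	/* buffers the "mempool" can currently hand out */

static int rx_burst(bool refill_only_if_rx)
{
	int nb_rx = posted;	/* receive everything that was posted */

	posted = 0;
	if (!refill_only_if_rx || nb_rx > 0) {
		/* gve_rx_refill_dqo() stand-in: repost what memory allows */
		while (free_mbufs > 0) {
			free_mbufs--;
			posted++;
		}
	}
	return nb_rx;
}

static int run(bool refill_only_if_rx)
{
	posted = 1;
	free_mbufs = 0;			/* memory-constrained: nothing to refill */
	rx_burst(refill_only_if_rx);	/* consumes the last posted buffer */

	free_mbufs = 4;			/* the application frees mbufs later */
	rx_burst(refill_only_if_rx);	/* empty poll: does it refill? */
	return rx_burst(refill_only_if_rx);
}

int main(void)
{
	printf("refill only when nb_rx > 0: %d packets after memory returns\n",
	       run(true));
	printf("refill unconditionally:     %d packets after memory returns\n",
	       run(false));
	return 0;
}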
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 855c06dc11..0203d23b9a 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -135,14 +135,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (nb_rx > 0) {
rxq->rx_tail = rx_id;
- if (rx_id_bufq != rxq->next_avail)
- rxq->next_avail = rx_id_bufq;
-
- gve_rx_refill_dqo(rxq);
+ rxq->next_avail = rx_id_bufq;
rxq->stats.packets += nb_rx;
rxq->stats.bytes += bytes;
}
+ gve_rx_refill_dqo(rxq);
return nb_rx;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.176827557 +0800
+++ 0089-net-gve-always-attempt-Rx-refill-on-DQ.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 Mon Sep 17 00:00:00 2001
+From efbc64f353914ae5763f5b201e1d597d6c7b5044 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 31d2149719b716dfc8a30f2fc4fe4bd2e02f7a50 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 81a68f0c7e..e4084bc0dd 100644
+index 855c06dc11..0203d23b9a 100644
@@ -30 +32 @@
-@@ -195,14 +195,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+@@ -135,14 +135,12 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfp: fix type declaration of some variables' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (88 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: always attempt Rx refill on DQ' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix representor port link status update' " Xueming Li
` (30 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Qin Ke; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=33d19bdd507223b7e7f2b30332642338e3f0f058
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 33d19bdd507223b7e7f2b30332642338e3f0f058 Mon Sep 17 00:00:00 2001
From: Qin Ke <qin.ke@corigine.com>
Date: Thu, 5 Sep 2024 14:25:04 +0800
Subject: [PATCH] net/nfp: fix type declaration of some variables
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 93ebb1e57e3ff5ce34168058d73a82ae206255cc ]
The type declarations of the variables 'speed' and 'i' in
'nfp_net_link_speed_rte2nfp()' are not correct; fix them.
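For the 'speed' part, the problem is simply that the faster rte_ethdev speed numbers (expressed in Mb/s) no longer fit in 16 bits; a minimal sketch with made-up constants mirroring those values:
#include <stdint.h>
#include <stdio.h>

#define SPEED_NUM_10G	10000u		/* mirrors the rte_ethdev Mb/s values */
#define SPEED_NUM_100G	100000u

int main(void)
{
	/* A uint16_t parameter truncates the larger speed values: 100000
	 * becomes 34464, so a 100G link can never match its table entry. */
	uint16_t speed16 = (uint16_t)SPEED_NUM_100G;
	uint32_t speed32 = SPEED_NUM_100G;

	printf("as uint16_t: %u, as uint32_t: %u\n",
	       (unsigned)speed16, (unsigned)speed32);
	return 0;
}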
Fixes: 36a9abd4b679 ("net/nfp: write link speed to control BAR")
Signed-off-by: Qin Ke <qin.ke@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/nfp_net_common.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 0491912bd3..08cb5f7d3b 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -153,10 +153,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
[NFP_NET_CFG_STS_LINK_RATE_100G] = RTE_ETH_SPEED_NUM_100G,
};
-static uint16_t
-nfp_net_link_speed_rte2nfp(uint16_t speed)
+static size_t
+nfp_net_link_speed_rte2nfp(uint32_t speed)
{
- uint16_t i;
+ size_t i;
for (i = 0; i < RTE_DIM(nfp_net_link_speed_nfp2rte); i++) {
if (speed == nfp_net_link_speed_nfp2rte[i])
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.219265057 +0800
+++ 0090-net-nfp-fix-type-declaration-of-some-variables.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 93ebb1e57e3ff5ce34168058d73a82ae206255cc Mon Sep 17 00:00:00 2001
+From 33d19bdd507223b7e7f2b30332642338e3f0f058 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 93ebb1e57e3ff5ce34168058d73a82ae206255cc ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index d93f8157b9..780e9829d0 100644
+index 0491912bd3..08cb5f7d3b 100644
@@ -24 +26 @@
-@@ -157,10 +157,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
+@@ -153,10 +153,10 @@ static const uint32_t nfp_net_link_speed_nfp2rte[] = {
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfp: fix representor port link status update' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (89 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix type declaration of some variables' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix refill logic causing memory corruption' " Xueming Li
` (29 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Qin Ke; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d2484a7f388c787198637ea4e39a2722ab231f15
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d2484a7f388c787198637ea4e39a2722ab231f15 Mon Sep 17 00:00:00 2001
From: Qin Ke <qin.ke@corigine.com>
Date: Thu, 5 Sep 2024 14:25:11 +0800
Subject: [PATCH] net/nfp: fix representor port link status update
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d95cf21d2ed6630d21b5b1ca4abc40155720cd3f ]
The link status of the representor port is reported by the flower
firmware through a control message, and it is already parsed and
stored in the 'link' field of the representor port structure.
The original logic wrongly read the link status from the control BAR
again and used that value in the following logic rather than the
'link' field of the representor port structure.
Fix this by deleting the control BAR read and using the right
link status value.
Fixes: c4de52eca76c ("net/nfp: remove redundancy for representor port")
Signed-off-by: Qin Ke <qin.ke@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_representor.c | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 88fb6975af..23709acbba 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -23,7 +23,6 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
__rte_unused int wait_to_complete)
{
int ret;
- uint32_t nn_link_status;
struct nfp_net_hw *pf_hw;
struct rte_eth_link *link;
struct nfp_flower_representor *repr;
@@ -32,9 +31,7 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
link = &repr->link;
pf_hw = repr->app_fw_flower->pf_hw;
- nn_link_status = nn_cfg_readw(&pf_hw->super, NFP_NET_CFG_STS);
-
- ret = nfp_net_link_update_common(dev, pf_hw, link, nn_link_status);
+ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
return ret;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.255512556 +0800
+++ 0091-net-nfp-fix-representor-port-link-status-update.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From d95cf21d2ed6630d21b5b1ca4abc40155720cd3f Mon Sep 17 00:00:00 2001
+From d2484a7f388c787198637ea4e39a2722ab231f15 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d95cf21d2ed6630d21b5b1ca4abc40155720cd3f ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -25,2 +27,2 @@
- drivers/net/nfp/flower/nfp_flower_representor.c | 7 +------
- 1 file changed, 1 insertion(+), 6 deletions(-)
+ drivers/net/nfp/flower/nfp_flower_representor.c | 5 +----
+ 1 file changed, 1 insertion(+), 4 deletions(-)
@@ -29 +31 @@
-index 054ea1a938..5db7d50618 100644
+index 88fb6975af..23709acbba 100644
@@ -32 +34 @@
-@@ -29,18 +29,13 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
+@@ -23,7 +23,6 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
@@ -37 +39 @@
-- struct nfp_net_hw *pf_hw;
+ struct nfp_net_hw *pf_hw;
@@ -40,2 +42 @@
-
- repr = dev->data->dev_private;
+@@ -32,9 +31,7 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
@@ -44 +45 @@
-- pf_hw = repr->app_fw_flower->pf_hw;
+ pf_hw = repr->app_fw_flower->pf_hw;
@@ -47,2 +48,2 @@
-- ret = nfp_net_link_update_common(dev, link, nn_link_status);
-+ ret = nfp_net_link_update_common(dev, link, link->link_status);
+- ret = nfp_net_link_update_common(dev, pf_hw, link, nn_link_status);
++ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/gve: fix refill logic causing memory corruption' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (90 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix representor port link status update' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve: add IO memory barriers before reading descriptors' " Xueming Li
` (28 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Rushil Gupta, Praveen Kaligineedi, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7907e4749624ac43a40a71bc200faa46d2e219dc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7907e4749624ac43a40a71bc200faa46d2e219dc Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 3 Oct 2024 18:05:18 -0700
Subject: [PATCH] net/gve: fix refill logic causing memory corruption
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 52c9b4069b216495d6e709bb500b6a52b8b2ca82 ]
There is a seemingly mundane error in the RX refill path which can lead
to major issues and ultimately a program crash.
This error occurs as part of an edge case where the exact number of
buffers to refill causes the ring to wrap around to 0. The current
refill logic is split into two conditions: first, when the number of
buffers to refill is greater than the number of buffers left in the ring
before wraparound occurs; second, when the opposite is true, and there
are enough buffers before wraparound to refill all buffers.
In this edge case, the first condition erroneously uses a (<) condition
to decide whether to wrap around, when it should have been (<=). In that
case, the second condition would run and the tail pointer would be set
to an invalid value (RING_SIZE). This causes a number of cascading
failures.
1. The first issue is rather mundane in that rxq->bufq_tail == RING_SIZE at
the end of the refill; this will correct itself on the next refill
without any sort of memory leak or corruption;
2. The second failure is that the head pointer would end up overrunning
the tail because the last buffer that is refilled is refilled at
sw_ring[RING_SIZE] instead of sw_ring[0]. This would cause the driver
to give the application a stale mbuf, one that has been potentially
freed or is otherwise stale;
3. The third failure comes from the fact that the software ring is being
overrun. Because we directly use the sw_ring pointer to refill
buffers, when sw_ring[RING_SIZE] is filled, a buffer overflow occurs.
The overwritten data has the potential to be important data, and this
can potentially cause the program to crash outright.
This patch fixes the refill bug while greatly simplifying the logic so
that it is much less error-prone.
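A minimal standalone sketch of the edge case, assuming an illustrative
RING_SIZE and variable names (this is not the driver code):

/* Demonstrates the off-by-one in the wrap decision described above. */
#include <stdio.h>

#define RING_SIZE 16u                  /* ring size, a power of two */

int main(void)
{
    unsigned int tail = 10;            /* current buffer queue tail */
    unsigned int nb_refill = 6;        /* refill exactly reaches the end */

    /*
     * Condensed form of the buggy split: the '<' test misses the case
     * where the refill exactly reaches the end of the ring, so the tail
     * ends up at RING_SIZE instead of wrapping to 0.
     */
    unsigned int buggy_tail =
        (RING_SIZE - tail < nb_refill) ? 0 : tail + nb_refill;

    /* Per-slot wrap as in the fix: the mask keeps the index in range. */
    unsigned int fixed_tail = tail;
    for (unsigned int i = 0; i < nb_refill; i++)
        fixed_tail = (fixed_tail + 1) & (RING_SIZE - 1);

    printf("buggy tail = %u, fixed tail = %u\n", buggy_tail, fixed_tail);
    return 0;
}

Running it prints buggy tail = 16 and fixed tail = 0, i.e. the invalid
tail value behind the failures listed above.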
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 62 ++++++++++--------------------------
1 file changed, 16 insertions(+), 46 deletions(-)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 0203d23b9a..f55a03f8c4 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -10,66 +10,36 @@
static inline void
gve_rx_refill_dqo(struct gve_rx_queue *rxq)
{
- volatile struct gve_rx_desc_dqo *rx_buf_ring;
volatile struct gve_rx_desc_dqo *rx_buf_desc;
struct rte_mbuf *nmb[rxq->nb_rx_hold];
uint16_t nb_refill = rxq->nb_rx_hold;
- uint16_t nb_desc = rxq->nb_rx_desc;
uint16_t next_avail = rxq->bufq_tail;
struct rte_eth_dev *dev;
uint64_t dma_addr;
- uint16_t delta;
int i;
if (rxq->nb_rx_hold < rxq->free_thresh)
return;
- rx_buf_ring = rxq->rx_ring;
- delta = nb_desc - next_avail;
- if (unlikely(delta < nb_refill)) {
- if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, delta) == 0)) {
- for (i = 0; i < delta; i++) {
- rx_buf_desc = &rx_buf_ring[next_avail + i];
- rxq->sw_ring[next_avail + i] = nmb[i];
- dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
- rx_buf_desc->header_buf_addr = 0;
- rx_buf_desc->buf_addr = dma_addr;
- }
- nb_refill -= delta;
- next_avail = 0;
- rxq->nb_rx_hold -= delta;
- } else {
- rxq->stats.no_mbufs_bulk++;
- rxq->stats.no_mbufs += nb_desc - next_avail;
- dev = &rte_eth_devices[rxq->port_id];
- dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
- PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
- rxq->port_id, rxq->queue_id);
- return;
- }
+ if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill))) {
+ rxq->stats.no_mbufs_bulk++;
+ rxq->stats.no_mbufs += nb_refill;
+ dev = &rte_eth_devices[rxq->port_id];
+ dev->data->rx_mbuf_alloc_failed += nb_refill;
+ PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
+ rxq->port_id, rxq->queue_id);
+ return;
}
- if (nb_desc - next_avail >= nb_refill) {
- if (likely(rte_pktmbuf_alloc_bulk(rxq->mpool, nmb, nb_refill) == 0)) {
- for (i = 0; i < nb_refill; i++) {
- rx_buf_desc = &rx_buf_ring[next_avail + i];
- rxq->sw_ring[next_avail + i] = nmb[i];
- dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
- rx_buf_desc->header_buf_addr = 0;
- rx_buf_desc->buf_addr = dma_addr;
- }
- next_avail += nb_refill;
- rxq->nb_rx_hold -= nb_refill;
- } else {
- rxq->stats.no_mbufs_bulk++;
- rxq->stats.no_mbufs += nb_desc - next_avail;
- dev = &rte_eth_devices[rxq->port_id];
- dev->data->rx_mbuf_alloc_failed += nb_desc - next_avail;
- PMD_DRV_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%u",
- rxq->port_id, rxq->queue_id);
- }
+ for (i = 0; i < nb_refill; i++) {
+ rx_buf_desc = &rxq->rx_ring[next_avail];
+ rxq->sw_ring[next_avail] = nmb[i];
+ dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb[i]));
+ rx_buf_desc->header_buf_addr = 0;
+ rx_buf_desc->buf_addr = dma_addr;
+ next_avail = (next_avail + 1) & (rxq->nb_rx_desc - 1);
}
-
+ rxq->nb_rx_hold -= nb_refill;
rte_write32(next_avail, rxq->qrx_tail);
rxq->bufq_tail = next_avail;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.294538355 +0800
+++ 0092-net-gve-fix-refill-logic-causing-memory-corruption.patch 2024-11-11 14:23:05.232192837 +0800
@@ -1 +1 @@
-From 52c9b4069b216495d6e709bb500b6a52b8b2ca82 Mon Sep 17 00:00:00 2001
+From 7907e4749624ac43a40a71bc200faa46d2e219dc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 52c9b4069b216495d6e709bb500b6a52b8b2ca82 ]
@@ -40 +42,0 @@
-Cc: stable@dpdk.org
@@ -50 +52 @@
-index e4084bc0dd..5371bab77d 100644
+index 0203d23b9a..f55a03f8c4 100644
@@ -53 +55 @@
-@@ -11,66 +11,36 @@
+@@ -10,66 +10,36 @@
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/gve: add IO memory barriers before reading descriptors' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (91 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: fix refill logic causing memory corruption' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
` (27 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington
Cc: xuemingl, Praveen Kaligineedi, Rushil Gupta, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1c6a6173878598d07e8fbfef9fe64114a89cb003
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 3 Oct 2024 18:05:35 -0700
Subject: [PATCH] net/gve: add IO memory barriers before reading descriptors
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]
Without memory barriers, there is no guarantee that the CPU will
actually wait until after the descriptor has been fully written before
loading descriptor data. In this case, it is possible that stale data is
read and acted on by the driver when processing TX or RX completions.
This change adds read memory barriers just after the generation bit is
read in both the RX and the TX path to ensure that the NIC has properly
passed ownership to the driver before descriptor data is read in full.
Note that memory barriers should not be needed after writing the RX
buffer queue/TX descriptor queue tails because rte_write32 includes an
implicit write memory barrier.
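The read-side pattern can be sketched with a toy descriptor; the struct
and field names below are placeholders, not the gve definitions:

#include <stdint.h>
#include <rte_atomic.h>

struct demo_desc {
    uint8_t generation;    /* ownership bit, written last by the NIC */
    uint16_t payload;      /* stands in for the remaining fields */
};

static inline int
demo_poll(const volatile struct demo_desc *desc, uint8_t cur_gen,
          uint16_t *out)
{
    if (desc->generation != cur_gen)
        return 0;          /* descriptor not handed over yet */

    rte_io_rmb();          /* order the ownership check before data loads */

    *out = desc->payload;  /* safe: the NIC has finished writing the entry */
    return 1;
}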
Fixes: 4022f9999f56 ("net/gve: support basic Tx data path for DQO")
Fixes: 45da16b5b181 ("net/gve: support basic Rx data path for DQO")
Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Praveen Kaligineedi <pkaligineedi@google.com>
Reviewed-by: Rushil Gupta <rushilg@google.com>
---
drivers/net/gve/gve_rx_dqo.c | 2 ++
drivers/net/gve/gve_tx_dqo.c | 2 ++
2 files changed, 4 insertions(+)
diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index f55a03f8c4..3f694a4d9a 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
if (rx_desc->generation != rxq->cur_gen_bit)
break;
+ rte_io_rmb();
+
if (unlikely(rx_desc->rx_error)) {
rxq->stats.errors++;
continue;
diff --git a/drivers/net/gve/gve_tx_dqo.c b/drivers/net/gve/gve_tx_dqo.c
index b9d6d01749..ce3681b6c6 100644
--- a/drivers/net/gve/gve_tx_dqo.c
+++ b/drivers/net/gve/gve_tx_dqo.c
@@ -24,6 +24,8 @@ gve_tx_clean_dqo(struct gve_tx_queue *txq)
if (compl_desc->generation != txq->cur_gen_bit)
return;
+ rte_io_rmb();
+
compl_tag = rte_le_to_cpu_16(compl_desc->completion_tag);
aim_txq = txq->txqs[compl_desc->id];
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.326330154 +0800
+++ 0093-net-gve-add-IO-memory-barriers-before-reading-descri.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From f8fee84eb48cdf13a7a29f5851a2e2a41045813a Mon Sep 17 00:00:00 2001
+From 1c6a6173878598d07e8fbfef9fe64114a89cb003 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f8fee84eb48cdf13a7a29f5851a2e2a41045813a ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -32 +34 @@
-index 5371bab77d..285c6ddd61 100644
+index f55a03f8c4..3f694a4d9a 100644
@@ -35 +37 @@
-@@ -132,6 +132,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
+@@ -72,6 +72,8 @@ gve_rx_burst_dqo(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
@@ -45 +47 @@
-index 731c287224..6984f92443 100644
+index b9d6d01749..ce3681b6c6 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/memif: fix buffer overflow in zero copy Rx' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (92 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve: add IO memory barriers before reading descriptors' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
` (26 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Mihai Brodschi; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3061d87b232c715422e2fe93017afc85f528fc40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3061d87b232c715422e2fe93017afc85f528fc40 Mon Sep 17 00:00:00 2001
From: Mihai Brodschi <mihai.brodschi@broadcom.com>
Date: Sat, 29 Jun 2024 00:01:29 +0300
Subject: [PATCH] net/memif: fix buffer overflow in zero copy Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 ]
rte_pktmbuf_alloc_bulk is called by the zero-copy receiver to allocate
new mbufs to be provided to the sender. The allocated mbuf pointers
are stored in a ring, but the alloc function doesn't implement index
wrap-around, so it writes past the end of the array. This results in
memory corruption and duplicate mbufs being received.
Allocate 2x the space for the mbuf ring, so that the alloc function
has a contiguous array to write to, then copy the excess entries
to the start of the array.
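A standalone sketch of that fix-up step, using illustrative names and a
toy ring size rather than the memif code itself:

#include <string.h>

#define RING_SIZE 8u    /* the real size is 1 << log2_ring_size */

/*
 * buffers[] is allocated with 2 * RING_SIZE entries so that a bulk
 * allocation starting near the end of the ring can be written
 * contiguously; the entries that logically belong at the start of the
 * ring are then copied back to index 0.
 */
static void
fixup_wraparound(void **buffers, unsigned int head, unsigned int n_slots)
{
    unsigned int mask = RING_SIZE - 1;
    unsigned int start = head & mask;

    if (n_slots > RING_SIZE - start)
        memcpy(buffers, &buffers[RING_SIZE],
               (n_slots + start - RING_SIZE) * sizeof(void *));
}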
Fixes: 43b815d88188 ("net/memif: support zero-copy slave")
Signed-off-by: Mihai Brodschi <mihai.brodschi@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
.mailmap | 1 +
drivers/net/memif/rte_eth_memif.c | 10 +++++++++-
2 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index c26a1acf7a..8d7fa55d9e 100644
--- a/.mailmap
+++ b/.mailmap
@@ -971,6 +971,7 @@ Michal Swiatkowski <michal.swiatkowski@intel.com>
Michal Wilczynski <michal.wilczynski@intel.com>
Michel Machado <michel@digirati.com.br>
Miguel Bernal Marin <miguel.bernal.marin@linux.intel.com>
+Mihai Brodschi <mihai.brodschi@broadcom.com>
Mihai Pogonaru <pogonarumihai@gmail.com>
Mike Baucom <michael.baucom@broadcom.com>
Mike Pattrick <mkp@redhat.com>
diff --git a/drivers/net/memif/rte_eth_memif.c b/drivers/net/memif/rte_eth_memif.c
index f05f4c24df..1eb41bb471 100644
--- a/drivers/net/memif/rte_eth_memif.c
+++ b/drivers/net/memif/rte_eth_memif.c
@@ -600,6 +600,10 @@ refill:
ret = rte_pktmbuf_alloc_bulk(mq->mempool, &mq->buffers[head & mask], n_slots);
if (unlikely(ret < 0))
goto no_free_mbufs;
+ if (unlikely(n_slots > ring_size - (head & mask))) {
+ rte_memcpy(mq->buffers, &mq->buffers[ring_size],
+ (n_slots + (head & mask) - ring_size) * sizeof(struct rte_mbuf *));
+ }
while (n_slots--) {
s0 = head++ & mask;
@@ -1245,8 +1249,12 @@ memif_init_queues(struct rte_eth_dev *dev)
}
mq->buffers = NULL;
if (pmd->flags & ETH_MEMIF_FLAG_ZERO_COPY) {
+ /*
+ * Allocate 2x ring_size to reserve a contiguous array for
+ * rte_pktmbuf_alloc_bulk (to store allocated mbufs).
+ */
mq->buffers = rte_zmalloc("bufs", sizeof(struct rte_mbuf *) *
- (1 << mq->log2_ring_size), 0);
+ (1 << (mq->log2_ring_size + 1)), 0);
if (mq->buffers == NULL)
return -ENOMEM;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.360403854 +0800
+++ 0094-net-memif-fix-buffer-overflow-in-zero-copy-Rx.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 Mon Sep 17 00:00:00 2001
+From 3061d87b232c715422e2fe93017afc85f528fc40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b92b18b76858ed58ebe9c5dea9dedf9a99e7e0e2 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index 7b3a20af68..2e909c48a8 100644
+index c26a1acf7a..8d7fa55d9e 100644
@@ -30 +32,2 @@
-@@ -1011,6 +1011,7 @@ Michal Wilczynski <michal.wilczynski@intel.com>
+@@ -971,6 +971,7 @@ Michal Swiatkowski <michal.swiatkowski@intel.com>
+ Michal Wilczynski <michal.wilczynski@intel.com>
@@ -32 +34,0 @@
- Midde Ajijur Rehaman <ajijurx.rehaman.midde@intel.com>
@@ -39 +41 @@
-index e220ffaf92..cd722f254f 100644
+index f05f4c24df..1eb41bb471 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/tap: restrict maximum number of MP FDs' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (93 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
` (25 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 11 Oct 2024 10:29:23 -0700
Subject: [PATCH] net/tap: restrict maximum number of MP FDs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 288649a11a8a332727f2a988c676ff7dfd1bc4c5 ]
Now that the maximum number of MP fds has increased to 253, it is
possible that the number of queues the TAP device can handle is less
than that. Therefore the code handling the MP message should only
allow the number of queues the device can handle.
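A sketch of the intent, with a stand-in constant for the per-device
limit (RTE_PMD_TAP_MAX_QUEUES in the patch below):

#define DEMO_TAP_MAX_QUEUES 16   /* stand-in for RTE_PMD_TAP_MAX_QUEUES */

/*
 * Reject a sync request whose queue count exceeds what the device
 * itself supports, independently of the larger RTE_MP_MAX_FD_NUM limit.
 */
static int
check_queue_count(unsigned int nb_rx_queues)
{
    if (nb_rx_queues > DEMO_TAP_MAX_QUEUES)
        return -1;
    return 0;
}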
Coverity issue: 445386
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/tap/rte_eth_tap.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/tap/rte_eth_tap.c b/drivers/net/tap/rte_eth_tap.c
index 3fa03cdbee..93bba3cec1 100644
--- a/drivers/net/tap/rte_eth_tap.c
+++ b/drivers/net/tap/rte_eth_tap.c
@@ -2393,9 +2393,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
/* Fill file descriptors for all queues */
reply.num_fds = 0;
reply_param->rxq_count = 0;
- if (dev->data->nb_rx_queues + dev->data->nb_tx_queues >
- RTE_MP_MAX_FD_NUM){
- TAP_LOG(ERR, "Number of rx/tx queues exceeds max number of fds");
+
+ if (dev->data->nb_rx_queues > RTE_PMD_TAP_MAX_QUEUES) {
+ TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
+ dev->data->nb_rx_queues, RTE_PMD_TAP_MAX_QUEUES);
return -1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.395312653 +0800
+++ 0095-net-tap-restrict-maximum-number-of-MP-FDs.patch 2024-11-11 14:23:05.242192837 +0800
@@ -1 +1 @@
-From 288649a11a8a332727f2a988c676ff7dfd1bc4c5 Mon Sep 17 00:00:00 2001
+From 7c8fbea353f3f8fe7d076d22c5a5f5aab4e7f683 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 288649a11a8a332727f2a988c676ff7dfd1bc4c5 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/tap/rte_eth_tap.c | 5 +++--
- 1 file changed, 3 insertions(+), 2 deletions(-)
+ drivers/net/tap/rte_eth_tap.c | 7 ++++---
+ 1 file changed, 4 insertions(+), 3 deletions(-)
@@ -21 +23 @@
-index 5ad3bbadd1..c486c6f073 100644
+index 3fa03cdbee..93bba3cec1 100644
@@ -24,5 +26,7 @@
-@@ -2391,9 +2391,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
- reply_param->q_count = 0;
-
- RTE_ASSERT(dev->data->nb_rx_queues == dev->data->nb_tx_queues);
-- if (dev->data->nb_rx_queues > RTE_MP_MAX_FD_NUM) {
+@@ -2393,9 +2393,10 @@ tap_mp_sync_queues(const struct rte_mp_msg *request, const void *peer)
+ /* Fill file descriptors for all queues */
+ reply.num_fds = 0;
+ reply_param->rxq_count = 0;
+- if (dev->data->nb_rx_queues + dev->data->nb_tx_queues >
+- RTE_MP_MAX_FD_NUM){
+- TAP_LOG(ERR, "Number of rx/tx queues exceeds max number of fds");
@@ -31,2 +35 @@
- TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
-- dev->data->nb_rx_queues, RTE_MP_MAX_FD_NUM);
++ TAP_LOG(ERR, "Number of rx/tx queues %u exceeds max number of fds %u",
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'ethdev: verify queue ID in Tx done cleanup' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (94 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
` (24 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d745b41ff05ca149ab263e21423ceb87bf017d45
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d745b41ff05ca149ab263e21423ceb87bf017d45 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:14:37 +0800
Subject: [PATCH] ethdev: verify queue ID in Tx done cleanup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 707f50cef003a89f8fc5170c2ca5aea808cf4297 ]
Verify the queue_id argument of the rte_eth_tx_done_cleanup() API.
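A hedged usage sketch (the free count and the logging are arbitrary):
with RTE_ETHDEV_DEBUG_TX enabled, an out-of-range queue_id now fails the
call instead of reaching the driver callback.

#include <stdio.h>
#include <rte_ethdev.h>

static void
drain_tx_ring(uint16_t port_id, uint16_t queue_id)
{
    /* Free up to 64 already-transmitted mbufs held in the Tx ring. */
    int ret = rte_eth_tx_done_cleanup(port_id, queue_id, 64);

    if (ret < 0)
        printf("tx done cleanup failed on port %u queue %u: %d\n",
               (unsigned int)port_id, (unsigned int)queue_id, ret);
}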
Fixes: 44a718c457b5 ("ethdev: add API to free consumed buffers in Tx ring")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
lib/ethdev/rte_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index 1f067873a9..dfcdf76fee 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -2823,6 +2823,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
+#ifdef RTE_ETHDEV_DEBUG_TX
+ ret = eth_dev_validate_tx_queue(dev, queue_id);
+ if (ret != 0)
+ return ret;
+#endif
+
if (*dev->dev_ops->tx_done_cleanup == NULL)
return -ENOTSUP;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.437338552 +0800
+++ 0096-ethdev-verify-queue-ID-in-Tx-done-cleanup.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 707f50cef003a89f8fc5170c2ca5aea808cf4297 Mon Sep 17 00:00:00 2001
+From d745b41ff05ca149ab263e21423ceb87bf017d45 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 707f50cef003a89f8fc5170c2ca5aea808cf4297 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 12f42f1d68..6413c54e3b 100644
+index 1f067873a9..dfcdf76fee 100644
@@ -21 +23 @@
-@@ -2910,6 +2910,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
+@@ -2823,6 +2823,12 @@ rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: verify reset type from firmware' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (95 preceding siblings ...)
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
` (23 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, Jie Hai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=689c302c20ced757d4a0790c984aefc73c37daec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 689c302c20ced757d4a0790c984aefc73c37daec Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:14:57 +0800
Subject: [PATCH] net/hns3: verify reset type from firmware
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3db846003734d38d59950ebe024ad6d61afe08f0 ]
Verify the reset type retrieved from the firmware.
Fixes: 1c1eb759e9d7 ("net/hns3: support RAS process in Kunpeng 930")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_intr.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/hns3/hns3_intr.c b/drivers/net/hns3/hns3_intr.c
index 0b768ef140..3f6b9e7fc4 100644
--- a/drivers/net/hns3/hns3_intr.c
+++ b/drivers/net/hns3/hns3_intr.c
@@ -2252,6 +2252,12 @@ hns3_handle_module_error_data(struct hns3_hw *hw, uint32_t *buf,
sum_err_info = (struct hns3_sum_err_info *)&buf[offset++];
mod_num = sum_err_info->mod_num;
reset_type = sum_err_info->reset_type;
+
+ if (reset_type >= HNS3_MAX_RESET) {
+ hns3_err(hw, "invalid reset type = %u", reset_type);
+ return;
+ }
+
if (reset_type && reset_type != HNS3_NONE_RESET)
hns3_atomic_set_bit(reset_type, &hw->reset.request);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.495799151 +0800
+++ 0097-net-hns3-verify-reset-type-from-firmware.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 3db846003734d38d59950ebe024ad6d61afe08f0 Mon Sep 17 00:00:00 2001
+From 689c302c20ced757d4a0790c984aefc73c37daec Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3db846003734d38d59950ebe024ad6d61afe08f0 ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index f7162ee7bc..2de2b86b02 100644
+index 0b768ef140..3f6b9e7fc4 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfp: fix link change return value' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (96 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
` (22 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chaoyong He; +Cc: xuemingl, Long Wu, Peng Zhang, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=882c4f80f60bf33988eb31bf85d6104e7461be6d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 882c4f80f60bf33988eb31bf85d6104e7461be6d Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Sat, 12 Oct 2024 10:41:02 +0800
Subject: [PATCH] net/nfp: fix link change return value
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0ca4f216b89162ce8142d665a98924bdf4a23a6e ]
The return value of 'nfp_eth_set_configured()' has three possible
outcomes, but the original logic wrongly treated it as having only two.
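A minimal sketch of the pattern assumed by the fix, independent of the
NFP code: a negative value is propagated as the error, while any
non-negative value is collapsed to the plain success expected from the
ethdev callback.

/* ret is the raw value returned by an nfp_eth_set_configured()-style helper. */
static int
normalize_configured_ret(int ret)
{
    if (ret < 0)
        return ret;   /* propagate the error code */
    return 0;         /* zero or positive both mean success here */
}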
Fixes: 61d4008fe6bb ("net/nfp: support setting link up/down")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/nfp/nfp_ethdev.c | 18 ++++++++++++++----
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 7495b01f16..e704c90dc5 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -201,30 +201,40 @@ error:
static int
nfp_net_set_link_up(struct rte_eth_dev *dev)
{
+ int ret;
struct nfp_net_hw *hw;
hw = dev->data->dev_private;
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
/* Configure the physical port down */
- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
+ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
else
- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
+ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
+ if (ret < 0)
+ return ret;
+
+ return 0;
}
/* Set the link down. */
static int
nfp_net_set_link_down(struct rte_eth_dev *dev)
{
+ int ret;
struct nfp_net_hw *hw;
hw = dev->data->dev_private;
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
/* Configure the physical port down */
- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
+ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
else
- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
+ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
+ if (ret < 0)
+ return ret;
+
+ return 0;
}
static uint8_t
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.547676450 +0800
+++ 0098-net-nfp-fix-link-change-return-value.patch 2024-11-11 14:23:05.252192837 +0800
@@ -1 +1 @@
-From 0ca4f216b89162ce8142d665a98924bdf4a23a6e Mon Sep 17 00:00:00 2001
+From 882c4f80f60bf33988eb31bf85d6104e7461be6d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0ca4f216b89162ce8142d665a98924bdf4a23a6e ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/nfp/nfp_ethdev.c | 14 ++++++++++++--
- 1 file changed, 12 insertions(+), 2 deletions(-)
+ drivers/net/nfp/nfp_ethdev.c | 18 ++++++++++++++----
+ 1 file changed, 14 insertions(+), 4 deletions(-)
@@ -21 +23 @@
-index 4b31785b9f..ef1c2a94b7 100644
+index 7495b01f16..e704c90dc5 100644
@@ -24 +26 @@
-@@ -527,26 +527,36 @@ error:
+@@ -201,30 +201,40 @@ error:
@@ -30 +31,0 @@
- struct nfp_net_hw_priv *hw_priv;
@@ -33 +33,0 @@
- hw_priv = dev->process_private;
@@ -35,2 +35,7 @@
-- return nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 1);
-+ ret = nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 1);
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ /* Configure the physical port down */
+- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
++ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
+ else
+- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
++ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
@@ -49 +53,0 @@
- struct nfp_net_hw_priv *hw_priv;
@@ -52 +55,0 @@
- hw_priv = dev->process_private;
@@ -54,2 +57,7 @@
-- return nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 0);
-+ ret = nfp_eth_set_configured(hw_priv->pf_dev->cpp, hw->nfp_idx, 0);
+ if (rte_eal_process_type() == RTE_PROC_PRIMARY)
+ /* Configure the physical port down */
+- return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
++ ret = nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
+ else
+- return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
++ ret = nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
@@ -62 +70 @@
- static void
+ static uint8_t
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/nfp: fix pause frame setting check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (97 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
` (21 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chaoyong He; +Cc: xuemingl, Long Wu, Peng Zhang, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5 Mon Sep 17 00:00:00 2001
From: Chaoyong He <chaoyong.he@corigine.com>
Date: Sat, 12 Oct 2024 10:41:04 +0800
Subject: [PATCH] net/nfp: fix pause frame setting check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4bb6de512fbc361e16d5a7a38b704735c831540d ]
The return value of 'nfp_eth_config_commit_end()' has three possible
outcomes, but the original logic wrongly treated it as having only two.
Fixes: 68aa35373a94 ("net/nfp: support setting pause frame switch mode")
Signed-off-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/nfp/nfp_net_common.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 08cb5f7d3b..134a9b807e 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -2218,7 +2218,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw *net_hw,
}
err = nfp_eth_config_commit_end(nsp);
- if (err != 0) {
+ if (err < 0) {
PMD_DRV_LOG(ERR, "Failed to configure pause frame.");
return err;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.591013149 +0800
+++ 0099-net-nfp-fix-pause-frame-setting-check.patch 2024-11-11 14:23:05.262192837 +0800
@@ -1 +1 @@
-From 4bb6de512fbc361e16d5a7a38b704735c831540d Mon Sep 17 00:00:00 2001
+From bffff80bf5f8d18afac9d8bb62cd32aa4f5d14b5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4bb6de512fbc361e16d5a7a38b704735c831540d ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 80d60515d8..5c3a9a7ae7 100644
+index 08cb5f7d3b..134a9b807e 100644
@@ -24 +26 @@
-@@ -2520,7 +2520,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw_priv *hw_priv,
+@@ -2218,7 +2218,7 @@ nfp_net_pause_frame_set(struct nfp_net_hw *net_hw,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/pcap: fix blocking Rx' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (98 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
` (20 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: xuemingl, Ofer Dagan, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=586405a26eab30b8ba04b54197aee639f89410d0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 586405a26eab30b8ba04b54197aee639f89410d0 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 5 Sep 2024 09:10:40 -0700
Subject: [PATCH] net/pcap: fix blocking Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f5ead8f84f205babb320a1d805fb436ba31a5532 ]
Use pcap_next_ex rather than just pcap_next because pcap_next
always blocks if there are no packets to receive.
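As a standalone illustration of the libpcap side, separate from the PMD
code below, pcap_next_ex() distinguishes "no packet yet" from errors
instead of blocking:

#include <stdio.h>
#include <pcap/pcap.h>

static int
read_one(pcap_t *pcap)
{
    struct pcap_pkthdr *header;
    const u_char *packet;
    int ret = pcap_next_ex(pcap, &header, &packet);

    if (ret == 1)
        return (int)header->caplen;   /* got a packet */
    if (ret == 0)
        return 0;                     /* timeout, nothing to read yet */

    fprintf(stderr, "pcap read error: %s\n", pcap_geterr(pcap));
    return -1;                        /* PCAP_ERROR or PCAP_ERROR_BREAK */
}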
Bugzilla ID: 1526
Fixes: 4c173302c307 ("pcap: add new driver")
Reported-by: Ofer Dagan <ofer.d@claroty.com>
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Tested-by: Ofer Dagan <ofer.d@claroty.com>
---
.mailmap | 1 +
drivers/net/pcap/pcap_ethdev.c | 33 +++++++++++++++++----------------
2 files changed, 18 insertions(+), 16 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8d7fa55d9e..8a0ce733c3 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1059,6 +1059,7 @@ Noa Ezra <noae@mellanox.com>
Nobuhiro Miki <nmiki@yahoo-corp.jp>
Norbert Ciosek <norbertx.ciosek@intel.com>
Odi Assli <odia@nvidia.com>
+Ofer Dagan <ofer.d@claroty.com>
Ognjen Joldzic <ognjen.joldzic@gmail.com>
Ola Liljedahl <ola.liljedahl@arm.com>
Oleg Polyakov <olegp123@walla.co.il>
diff --git a/drivers/net/pcap/pcap_ethdev.c b/drivers/net/pcap/pcap_ethdev.c
index 1fb98e3d2b..728ef85d53 100644
--- a/drivers/net/pcap/pcap_ethdev.c
+++ b/drivers/net/pcap/pcap_ethdev.c
@@ -274,7 +274,7 @@ static uint16_t
eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
{
unsigned int i;
- struct pcap_pkthdr header;
+ struct pcap_pkthdr *header;
struct pmd_process_private *pp;
const u_char *packet;
struct rte_mbuf *mbuf;
@@ -294,9 +294,13 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
*/
for (i = 0; i < nb_pkts; i++) {
/* Get the next PCAP packet */
- packet = pcap_next(pcap, &header);
- if (unlikely(packet == NULL))
+ int ret = pcap_next_ex(pcap, &header, &packet);
+ if (ret != 1) {
+ if (ret == PCAP_ERROR)
+ pcap_q->rx_stat.err_pkts++;
+
break;
+ }
mbuf = rte_pktmbuf_alloc(pcap_q->mb_pool);
if (unlikely(mbuf == NULL)) {
@@ -304,33 +308,30 @@ eth_pcap_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
break;
}
- if (header.caplen <= rte_pktmbuf_tailroom(mbuf)) {
+ uint32_t len = header->caplen;
+ if (len <= rte_pktmbuf_tailroom(mbuf)) {
/* pcap packet will fit in the mbuf, can copy it */
- rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet,
- header.caplen);
- mbuf->data_len = (uint16_t)header.caplen;
+ rte_memcpy(rte_pktmbuf_mtod(mbuf, void *), packet, len);
+ mbuf->data_len = len;
} else {
/* Try read jumbo frame into multi mbufs. */
if (unlikely(eth_pcap_rx_jumbo(pcap_q->mb_pool,
- mbuf,
- packet,
- header.caplen) == -1)) {
+ mbuf, packet, len) == -1)) {
pcap_q->rx_stat.err_pkts++;
rte_pktmbuf_free(mbuf);
break;
}
}
- mbuf->pkt_len = (uint16_t)header.caplen;
- *RTE_MBUF_DYNFIELD(mbuf, timestamp_dynfield_offset,
- rte_mbuf_timestamp_t *) =
- (uint64_t)header.ts.tv_sec * 1000000 +
- header.ts.tv_usec;
+ mbuf->pkt_len = len;
+ uint64_t us = (uint64_t)header->ts.tv_sec * US_PER_S + header->ts.tv_usec;
+
+ *RTE_MBUF_DYNFIELD(mbuf, timestamp_dynfield_offset, rte_mbuf_timestamp_t *) = us;
mbuf->ol_flags |= timestamp_rx_dynflag;
mbuf->port = pcap_q->port_id;
bufs[num_rx] = mbuf;
num_rx++;
- rx_bytes += header.caplen;
+ rx_bytes += len;
}
pcap_q->rx_stat.pkts += num_rx;
pcap_q->rx_stat.bytes += rx_bytes;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.636059148 +0800
+++ 0100-net-pcap-fix-blocking-Rx.patch 2024-11-11 14:23:05.262192837 +0800
@@ -1 +1 @@
-From f5ead8f84f205babb320a1d805fb436ba31a5532 Mon Sep 17 00:00:00 2001
+From 586405a26eab30b8ba04b54197aee639f89410d0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f5ead8f84f205babb320a1d805fb436ba31a5532 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 3e9e8b416e..bd7958652a 100644
+index 8d7fa55d9e..8a0ce733c3 100644
@@ -25 +27,2 @@
-@@ -1103,6 +1103,7 @@ Nobuhiro Miki <nmiki@yahoo-corp.jp>
+@@ -1059,6 +1059,7 @@ Noa Ezra <noae@mellanox.com>
+ Nobuhiro Miki <nmiki@yahoo-corp.jp>
@@ -27 +29,0 @@
- Norbert Zulinski <norbertx.zulinski@intel.com>
@@ -32 +34 @@
- Oleg Akhrem <oleg.akhrem@intel.com>
+ Oleg Polyakov <olegp123@walla.co.il>
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ice/base: add bounds check' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (99 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
` (19 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Fabio Pricoco; +Cc: xuemingl, Bruce Richardson, Vladimir Medvedkin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0df9a046c7c8315e7ec101eca515c5ea7a5b44e7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0df9a046c7c8315e7ec101eca515c5ea7a5b44e7 Mon Sep 17 00:00:00 2001
From: Fabio Pricoco <fabio.pricoco@intel.com>
Date: Mon, 14 Oct 2024 12:02:06 +0100
Subject: [PATCH] net/ice/base: add bounds check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 ]
Refactor the while loop to add a check that the values read are in the
correct range.
Fixes: 6c1f26be50a2 ("net/ice/base: add control queue information")
Signed-off-by: Fabio Pricoco <fabio.pricoco@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ice/base/ice_controlq.c | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_controlq.c b/drivers/net/ice/base/ice_controlq.c
index c34407b48c..4896fd2731 100644
--- a/drivers/net/ice/base/ice_controlq.c
+++ b/drivers/net/ice/base/ice_controlq.c
@@ -846,12 +846,23 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
u16 ntc = sq->next_to_clean;
struct ice_sq_cd *details;
struct ice_aq_desc *desc;
+ u32 head;
desc = ICE_CTL_Q_DESC(*sq, ntc);
details = ICE_CTL_Q_DETAILS(*sq, ntc);
- while (rd32(hw, cq->sq.head) != ntc) {
- ice_debug(hw, ICE_DBG_AQ_MSG, "ntc %d head %d.\n", ntc, rd32(hw, cq->sq.head));
+ head = rd32(hw, sq->head);
+ if (head >= sq->count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Read head value (%d) exceeds allowed range.\n",
+ head);
+ return 0;
+ }
+
+ while (head != ntc) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "ntc %d head %d.\n",
+ ntc, head);
ice_memset(desc, 0, sizeof(*desc), ICE_DMA_MEM);
ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
ntc++;
@@ -859,6 +870,14 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
ntc = 0;
desc = ICE_CTL_Q_DESC(*sq, ntc);
details = ICE_CTL_Q_DETAILS(*sq, ntc);
+
+ head = rd32(hw, sq->head);
+ if (head >= sq->count) {
+ ice_debug(hw, ICE_DBG_AQ_MSG,
+ "Read head value (%d) exceeds allowed range.\n",
+ head);
+ return 0;
+ }
}
sq->next_to_clean = ntc;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.692233347 +0800
+++ 0101-net-ice-base-add-bounds-check.patch 2024-11-11 14:23:05.282192836 +0800
@@ -1 +1 @@
-From 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 Mon Sep 17 00:00:00 2001
+From 0df9a046c7c8315e7ec101eca515c5ea7a5b44e7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9378aa47f45fa5cd5be219c8eb770f096e8a4c27 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index af27dc8542..b210495827 100644
+index c34407b48c..4896fd2731 100644
@@ -23,2 +25 @@
-@@ -839,16 +839,35 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
- struct ice_ctl_q_ring *sq = &cq->sq;
+@@ -846,12 +846,23 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -25,0 +27 @@
+ struct ice_sq_cd *details;
@@ -29,0 +32 @@
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
@@ -45,0 +49 @@
+ ice_memset(details, 0, sizeof(*details), ICE_NONDMA_MEM);
@@ -47 +51 @@
- if (ntc == sq->count)
+@@ -859,6 +870,14 @@ static u16 ice_clean_sq(struct ice_hw *hw, struct ice_ctl_q_info *cq)
@@ -49,0 +54 @@
+ details = ICE_CTL_Q_DETAILS(*sq, ntc);
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ice/base: fix VLAN replay after reset' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (100 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/iavf: preserve MAC address with i40e PF Linux driver' " Xueming Li
` (18 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Dave Ertman
Cc: xuemingl, Jacob Keller, Bruce Richardson, Vladimir Medvedkin,
dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7a744b7e5badcfdc620df91c533d2b4d2abd068a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7a744b7e5badcfdc620df91c533d2b4d2abd068a Mon Sep 17 00:00:00 2001
From: Dave Ertman <david.m.ertman@intel.com>
Date: Mon, 14 Oct 2024 12:02:07 +0100
Subject: [PATCH] net/ice/base: fix VLAN replay after reset
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8e191a67df2d217c2cbd96325b38bf2f5f028f03 ]
If there is more than one VLAN defined when any reset that affects the
PF is initiated, after the reset rebuild, no traffic will pass on any
VLAN but the last one created.
This is caused by the iteration through the VLANs during replay, each
clearing the vsi_map bitmap of the VSI that is being replayed. The
problem is that during the replay, the pointer to the vsi_map bitmap is
used by each successive VLAN to determine if it should be replayed on
this VSI.
The logic was that the replay of the VLAN would replace the bit in the
map before the next VLAN would iterate through. But, since the replay
copies the old bitmap pointer to filt_replay_rules and creates a new one
for the recreated VLANs, it does not do this, and leaves the old bitmap
broken to be used to replay the remaining VLANs.
Since the old bitmap will be cleaned up in post replay cleanup, there is
no need to alter it and break following VLAN replay, so don't clear the
bit.
Fixes: c7dd15931183 ("net/ice/base: add virtual switch code")
Signed-off-by: Dave Ertman <david.m.ertman@intel.com>
Signed-off-by: Jacob Keller <jacob.e.keller@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
drivers/net/ice/base/ice_switch.c | 2 --
1 file changed, 2 deletions(-)
diff --git a/drivers/net/ice/base/ice_switch.c b/drivers/net/ice/base/ice_switch.c
index c4fd07199e..7b103e5e34 100644
--- a/drivers/net/ice/base/ice_switch.c
+++ b/drivers/net/ice/base/ice_switch.c
@@ -10023,8 +10023,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
if (!itr->vsi_list_info ||
!ice_is_bit_set(itr->vsi_list_info->vsi_map, vsi_handle))
continue;
- /* Clearing it so that the logic can add it back */
- ice_clear_bit(vsi_handle, itr->vsi_list_info->vsi_map);
f_entry.fltr_info.vsi_handle = vsi_handle;
f_entry.fltr_info.fltr_act = ICE_FWD_TO_VSI;
/* update the src in case it is VSI num */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:09.763788646 +0800
+++ 0102-net-ice-base-fix-VLAN-replay-after-reset.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From 8e191a67df2d217c2cbd96325b38bf2f5f028f03 Mon Sep 17 00:00:00 2001
+From 7a744b7e5badcfdc620df91c533d2b4d2abd068a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8e191a67df2d217c2cbd96325b38bf2f5f028f03 ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
@@ -38 +40 @@
-index 96ef26d535..a3786961e6 100644
+index c4fd07199e..7b103e5e34 100644
@@ -41 +43 @@
-@@ -10110,8 +10110,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
+@@ -10023,8 +10023,6 @@ ice_replay_vsi_fltr(struct ice_hw *hw, struct ice_port_info *pi,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/iavf: preserve MAC address with i40e PF Linux driver' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (101 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: workaround list management of Rx queue control' " Xueming Li
` (17 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: David Marchand; +Cc: xuemingl, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=819d57cd27c47941913084143eb32d1cb3a8bf20
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 819d57cd27c47941913084143eb32d1cb3a8bf20 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Tue, 1 Oct 2024 11:12:54 +0200
Subject: [PATCH] net/iavf: preserve MAC address with i40e PF Linux driver
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3d42086def307be853d1e2e5b9d1e76725c3661f ]
Following two upstream Linux kernel changes (see links), the mac address
of an iavf port, serviced by an i40e PF driver, is lost when the DPDK iavf
driver probes the port again (which may be triggered at any point of a
DPDK application life, like when a reset event is triggered by the PF).
A first change results in the mac address of the VF port being reset to 0
during the VIRTCHNL_OP_GET_VF_RESOURCES query.
The i40e PF driver change is pretty obscure but the iavf Linux driver does
set VIRTCHNL_VF_OFFLOAD_USO.
Announcing such a capability in the DPDK driver does not seem to be an
issue, so do the same in DPDK to keep the legacy behavior of a fixed mac.
Then a second change in the kernel results in the VF mac address being
cleared when the VF driver remove its default mac address.
Removing (unicast or multicast) mac addresses is not done by the kernel VF
driver in general.
The reason why the DPDK driver behaves like this is undocumented
(and lost because the authors are not active anymore).
Aligning DPDK behavior to the upstream kernel driver is safer in any
case.
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=fed0d9f13266
Link: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=ceb29474bbbc
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/iavf/iavf_ethdev.c | 22 +++++-----------------
drivers/net/iavf/iavf_vchnl.c | 1 +
2 files changed, 6 insertions(+), 17 deletions(-)
diff --git a/drivers/net/iavf/iavf_ethdev.c b/drivers/net/iavf/iavf_ethdev.c
index 1a98c7734c..9f3658c48b 100644
--- a/drivers/net/iavf/iavf_ethdev.c
+++ b/drivers/net/iavf/iavf_ethdev.c
@@ -1033,7 +1033,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_configure_queues(adapter,
IAVF_CFG_Q_NUM_PER_BUF, index) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
- goto err_queue;
+ goto error;
}
num_queue_pairs -= IAVF_CFG_Q_NUM_PER_BUF;
index += IAVF_CFG_Q_NUM_PER_BUF;
@@ -1041,12 +1041,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_configure_queues(adapter, num_queue_pairs, index) != 0) {
PMD_DRV_LOG(ERR, "configure queues failed");
- goto err_queue;
+ goto error;
}
if (iavf_config_rx_queues_irqs(dev, intr_handle) != 0) {
PMD_DRV_LOG(ERR, "configure irq failed");
- goto err_queue;
+ goto error;
}
/* re-enable intr again, because efd assign may change */
if (dev->data->dev_conf.intr_conf.rxq != 0) {
@@ -1066,14 +1066,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
if (iavf_start_queues(dev) != 0) {
PMD_DRV_LOG(ERR, "enable queues failed");
- goto err_mac;
+ goto error;
}
return 0;
-err_mac:
- iavf_add_del_all_mac_addr(adapter, false);
-err_queue:
+error:
return -1;
}
@@ -1102,16 +1100,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
/* Rx interrupt vector mapping free */
rte_intr_vec_list_free(intr_handle);
- /* adminq will be disabled when vf is resetting. */
- if (!vf->in_reset_recovery) {
- /* remove all mac addrs */
- iavf_add_del_all_mac_addr(adapter, false);
-
- /* remove all multicast addresses */
- iavf_add_del_mc_addr_list(adapter, vf->mc_addrs, vf->mc_addrs_num,
- false);
- }
-
iavf_stop_queues(dev);
adapter->stopped = 1;
diff --git a/drivers/net/iavf/iavf_vchnl.c b/drivers/net/iavf/iavf_vchnl.c
index 8ca104c04e..71be87845a 100644
--- a/drivers/net/iavf/iavf_vchnl.c
+++ b/drivers/net/iavf/iavf_vchnl.c
@@ -710,6 +710,7 @@ iavf_get_vf_resource(struct iavf_adapter *adapter)
VIRTCHNL_VF_OFFLOAD_ADV_RSS_PF |
VIRTCHNL_VF_OFFLOAD_FSUB_PF |
VIRTCHNL_VF_OFFLOAD_REQ_QUEUES |
+ VIRTCHNL_VF_OFFLOAD_USO |
VIRTCHNL_VF_OFFLOAD_CRC |
VIRTCHNL_VF_OFFLOAD_VLAN_V2 |
VIRTCHNL_VF_LARGE_NUM_QPAIRS |
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.886694643 +0800
+++ 0103-net-iavf-preserve-MAC-address-with-i40e-PF-Linux-dri.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From 3d42086def307be853d1e2e5b9d1e76725c3661f Mon Sep 17 00:00:00 2001
+From 819d57cd27c47941913084143eb32d1cb3a8bf20 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3d42086def307be853d1e2e5b9d1e76725c3661f ]
@@ -27,2 +29,0 @@
-Cc: stable@dpdk.org
-
@@ -40 +41 @@
-index c200f63b4f..7f80cd6258 100644
+index 1a98c7734c..9f3658c48b 100644
@@ -43 +44 @@
-@@ -1044,7 +1044,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1033,7 +1033,7 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -52 +53 @@
-@@ -1052,12 +1052,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1041,12 +1041,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -67 +68 @@
-@@ -1077,14 +1077,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
+@@ -1066,14 +1066,12 @@ iavf_dev_start(struct rte_eth_dev *dev)
@@ -84 +85 @@
-@@ -1113,16 +1111,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
+@@ -1102,16 +1100,6 @@ iavf_dev_stop(struct rte_eth_dev *dev)
@@ -102 +103 @@
-index 69420bc9b6..065ab3594c 100644
+index 8ca104c04e..71be87845a 100644
* patch 'net/mlx5: workaround list management of Rx queue control' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (102 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/iavf: preserve MAC address with i40e PF Linux driver' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5/hws: fix flex item as tunnel header' " Xueming Li
` (16 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Bing Zhao; +Cc: xuemingl, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=789faccc7ec15de2004468416d46ea1184c68f25
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 789faccc7ec15de2004468416d46ea1184c68f25 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Tue, 23 Jul 2024 14:14:11 +0300
Subject: [PATCH] net/mlx5: workaround list management of Rx queue control
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f957ac99643535fd218753f4f956fc9c5aadd23c ]
The LIST_REMOVE macro only removes the entry from the list and
updates the list itself. The pointers of the removed entry are not
reset to NULL, so nothing prevents it from being accessed a second
time.
In the previous fix for the memory access, the "rxq_ctrl" was removed
from the list in a device's private data when the "refcnt" was
decreased to 0. Under only-shared or only-non-shared queue scenarios,
this was safe since all the "rxq_ctrl" entries were either freed or
kept.
There is one case where shared and non-shared Rx queues are configured
simultaneously, for example, a hairpin Rx queue cannot be shared.
When closing the port that allocated the shared Rx queues'
"rxq_ctrl", if the next entry is a hairpin "rxq_ctrl", the hairpin
"rxq_ctrl" will be freed directly with the other resources. When
trying to close another port sharing the "rxq_ctrl", LIST_REMOVE
will be called again and cause a use-after-free issue. If the memory
is no longer mapped, there will be a SIGSEGV.
Add a flag in the Rx queue private structure so that the "rxq_ctrl"
is removed from the list only on the port/queue that allocated it.
Fixes: bcc220cb57d7 ("net/mlx5: fix shared Rx queue list management")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
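For illustration, here is a minimal standalone sketch of the pattern this
fix relies on, using made-up names and a plain <sys/queue.h> list rather
than the actual mlx5 structures: only the queue marked as the possessor of
a shared control block ever unlinks it, because LIST_REMOVE() leaves the
removed entry's link pointers dangling.
#include <sys/queue.h>
#include <stdbool.h>
#include <stdlib.h>

struct ctrl {
        LIST_ENTRY(ctrl) next;   /* entry in a global control list */
};

struct queue {
        struct ctrl *ctrl;       /* control block, possibly shared */
        bool possessor;          /* true only on the allocating queue */
};

static LIST_HEAD(, ctrl) ctrl_list = LIST_HEAD_INITIALIZER(ctrl_list);

static void
queue_release(struct queue *q)
{
        /* Only the possessor unlinks the shared control block; a second
         * LIST_REMOVE() on the same entry would follow the stale
         * pointers it left behind. */
        if (q->possessor)
                LIST_REMOVE(q->ctrl, next);
}

int main(void)
{
        struct ctrl *c = calloc(1, sizeof(*c));
        struct queue q1 = { .ctrl = c, .possessor = true };
        struct queue q2 = { .ctrl = c, .possessor = false }; /* shares c */

        if (c == NULL)
                return 1;
        LIST_INSERT_HEAD(&ctrl_list, c, next);
        queue_release(&q2); /* no-op: not the possessor */
        queue_release(&q1); /* unlinks c exactly once */
        free(c);
        return 0;
}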
---
drivers/net/mlx5/mlx5_rx.h | 1 +
drivers/net/mlx5/mlx5_rxq.c | 5 ++++-
2 files changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index d0ceae72ea..08ab0a042d 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -173,6 +173,7 @@ struct mlx5_rxq_ctrl {
/* RX queue private data. */
struct mlx5_rxq_priv {
uint16_t idx; /* Queue index. */
+ bool possessor; /* Shared rxq_ctrl allocated for the 1st time. */
uint32_t refcnt; /* Reference counter. */
struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 1bb036afeb..e45cca9133 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -938,6 +938,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rte_errno = ENOMEM;
return -rte_errno;
}
+ rxq->possessor = true;
}
rxq->priv = priv;
rxq->idx = idx;
@@ -2016,6 +2017,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
tmpl->rxq.idx = idx;
rxq->hairpin_conf = *hairpin_conf;
+ rxq->possessor = true;
mlx5_rxq_ref(dev, idx);
LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
return tmpl;
@@ -2283,7 +2285,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
RTE_ETH_QUEUE_STATE_STOPPED;
}
} else { /* Refcnt zero, closing device. */
- LIST_REMOVE(rxq_ctrl, next);
+ if (rxq->possessor)
+ LIST_REMOVE(rxq_ctrl, next);
LIST_REMOVE(rxq, owner_entry);
if (LIST_EMPTY(&rxq_ctrl->owners)) {
if (!rxq_ctrl->is_hairpin)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:09.970133241 +0800
+++ 0104-net-mlx5-workaround-list-management-of-Rx-queue-cont.patch 2024-11-11 14:23:05.292192836 +0800
@@ -1 +1 @@
-From f957ac99643535fd218753f4f956fc9c5aadd23c Mon Sep 17 00:00:00 2001
+From 789faccc7ec15de2004468416d46ea1184c68f25 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f957ac99643535fd218753f4f956fc9c5aadd23c ]
@@ -28 +30,0 @@
-Cc: stable@dpdk.org
@@ -38 +40 @@
-index 7d144921ab..9bcb43b007 100644
+index d0ceae72ea..08ab0a042d 100644
@@ -46 +48 @@
- RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
+ uint32_t refcnt; /* Reference counter. */
@@ -50 +52 @@
-index f13fc3b353..c6655b7db4 100644
+index 1bb036afeb..e45cca9133 100644
@@ -61 +63 @@
-@@ -2015,6 +2016,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+@@ -2016,6 +2017,7 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
@@ -69 +71 @@
-@@ -2282,7 +2284,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2283,7 +2285,8 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
* patch 'net/mlx5/hws: fix flex item as tunnel header' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (103 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: workaround list management of Rx queue control' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: add flex item query for tunnel mode' " Xueming Li
` (15 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f64e2a1e86b8b85627b7ce9563278eff71e26c8b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f64e2a1e86b8b85627b7ce9563278eff71e26c8b Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:17 +0300
Subject: [PATCH] net/mlx5/hws: fix flex item as tunnel header
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 624ca89b57550f13c49224d931d391680dc62d69 ]
The RTE flex item can represent the tunnel header and
split the inner and outer layer items. HWS did not
support these flex item specifics.
Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 13 +++++++++++--
1 file changed, 11 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 031e87bc0c..1b8cb18d63 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -2536,8 +2536,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
break;
case RTE_FLOW_ITEM_TYPE_FLEX:
ret = mlx5dr_definer_conv_item_flex_parser(&cd, items, i);
- item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
- MLX5_FLOW_ITEM_OUTER_FLEX;
+ if (ret == 0) {
+ enum rte_flow_item_flex_tunnel_mode tunnel_mode =
+ FLEX_TUNNEL_MODE_SINGLE;
+
+ ret = mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+ if (tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL)
+ item_flags |= MLX5_FLOW_ITEM_FLEX_TUNNEL;
+ else
+ item_flags |= cd.tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+ MLX5_FLOW_ITEM_OUTER_FLEX;
+ }
break;
case RTE_FLOW_ITEM_TYPE_MPLS:
ret = mlx5dr_definer_conv_item_mpls(&cd, items, i);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.026742840 +0800
+++ 0105-net-mlx5-hws-fix-flex-item-as-tunnel-header.patch 2024-11-11 14:23:05.302192836 +0800
@@ -1 +1 @@
-From 624ca89b57550f13c49224d931d391680dc62d69 Mon Sep 17 00:00:00 2001
+From f64e2a1e86b8b85627b7ce9563278eff71e26c8b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 624ca89b57550f13c49224d931d391680dc62d69 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 51a3f7be4b..2dfcc5eba6 100644
+index 031e87bc0c..1b8cb18d63 100644
@@ -23 +25 @@
-@@ -3267,8 +3267,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
+@@ -2536,8 +2536,17 @@ mlx5dr_definer_conv_items_to_hl(struct mlx5dr_context *ctx,
* patch 'net/mlx5: add flex item query for tunnel mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (104 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5/hws: fix flex item as tunnel header' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item " Xueming Li
` (14 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=cb33b62e095707f9ad4c014f83e60359f06af2ec
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From cb33b62e095707f9ad4c014f83e60359f06af2ec Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:16 +0300
Subject: [PATCH] net/mlx5: add flex item query for tunnel mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 850233aca685ed1142ae2003ec6d4eefe82df4bd ]
While parsing the RTE item array, the PMD needs to know
whether the flex item represents the tunnel header.
The appropriate tunnel mode query API is added.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 ++
drivers/net/mlx5/mlx5_flow_flex.c | 27 +++++++++++++++++++++++++++
2 files changed, 29 insertions(+)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 0c81bcab9f..f48b58dcf4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2498,6 +2498,8 @@ int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
bool is_mask, bool tunnel, uint32_t *value);
+int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+ enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
struct rte_flow_item_flex_handle *handle,
bool acquire);
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index 4ae03a23f1..e7e6358144 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -291,6 +291,33 @@ mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
return 0;
}
+/**
+ * Get the flex parser tunnel mode.
+ *
+ * @param[in] item
+ * RTE Flex item.
+ * @param[in, out] tunnel_mode
+ * Pointer to return tunnel mode.
+ *
+ * @return
+ * 0 on success, otherwise negative error code.
+ */
+int
+mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
+ enum rte_flow_item_flex_tunnel_mode *tunnel_mode)
+{
+ if (item && item->spec && tunnel_mode) {
+ const struct rte_flow_item_flex *spec = item->spec;
+ struct mlx5_flex_item *flex = (struct mlx5_flex_item *)spec->handle;
+
+ if (flex) {
+ *tunnel_mode = flex->tunnel_mode;
+ return 0;
+ }
+ }
+ return -EINVAL;
+}
+
/**
* Translate item pattern into matcher fields according to translation
* array.
--
2.34.1
* patch 'net/mlx5: fix flex item tunnel mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (105 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: add flex item query for tunnel mode' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
` (13 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=06b70793f9f1885d71895b16d648be571c8045c1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 06b70793f9f1885d71895b16d648be571c8045c1 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:18 +0300
Subject: [PATCH] net/mlx5: fix flex item tunnel mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e46b26663de964b54ed9fc2e7eade07261d8e396 ]
The RTE flex item can represent the tunnel header itself
and split the inner and outer items; this should be reflected
in the item flags while the PMD is processing the item array.
Fixes: 8c0ca7527bc8 ("net/mlx5/hws: support flex item matching")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index af4df13b2f..ca2611942e 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -394,6 +394,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
uint64_t last_item = 0;
for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++) {
+ enum rte_flow_item_flex_tunnel_mode tunnel_mode = FLEX_TUNNEL_MODE_SINGLE;
int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
int item_type = items->type;
@@ -439,6 +440,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
case RTE_FLOW_ITEM_TYPE_GTP:
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ break;
+ case RTE_FLOW_ITEM_TYPE_FLEX:
+ mlx5_flex_get_tunnel_mode(items, &tunnel_mode);
+ last_item = tunnel_mode == FLEX_TUNNEL_MODE_TUNNEL ?
+ MLX5_FLOW_ITEM_FLEX_TUNNEL :
+ tunnel ? MLX5_FLOW_ITEM_INNER_FLEX :
+ MLX5_FLOW_ITEM_OUTER_FLEX;
default:
break;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.103428639 +0800
+++ 0107-net-mlx5-fix-flex-item-tunnel-mode.patch 2024-11-11 14:23:05.312192836 +0800
@@ -1 +1 @@
-From e46b26663de964b54ed9fc2e7eade07261d8e396 Mon Sep 17 00:00:00 2001
+From 06b70793f9f1885d71895b16d648be571c8045c1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e46b26663de964b54ed9fc2e7eade07261d8e396 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 279ffdc03a..986fd8a93d 100644
+index af4df13b2f..ca2611942e 100644
@@ -23 +25 @@
-@@ -558,6 +558,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
+@@ -394,6 +394,7 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
@@ -31,3 +33,3 @@
-@@ -606,6 +607,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
- case RTE_FLOW_ITEM_TYPE_COMPARE:
- last_item = MLX5_FLOW_ITEM_COMPARE;
+@@ -439,6 +440,13 @@ flow_hw_matching_item_flags_get(const struct rte_flow_item items[])
+ case RTE_FLOW_ITEM_TYPE_GTP:
+ last_item = MLX5_FLOW_LAYER_GTP;
@@ -34,0 +37 @@
++ break;
@@ -41 +43,0 @@
-+ break;
* patch 'net/mlx5: fix number of supported flex parsers' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (106 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
` (12 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2ab85e98b8bf7c5e6f56d85a9e40f97106da4550
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2ab85e98b8bf7c5e6f56d85a9e40f97106da4550 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:19 +0300
Subject: [PATCH] net/mlx5: fix number of supported flex parsers
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f ]
The hardware supports up to 8 flex parser configurations.
Some of them can be utilized internally by the firmware, depending on
the configured profile ("FLEX_PARSER_PROFILE_ENABLE" in the NV settings).
The firmware does not report in its capabilities how many flex parser
configurations remain available (this is a device-wide resource that
can be allocated at runtime by other agents - kernel, DPDK
applications, etc.), and once no more parsers are available at parse
object creation time, the firmware simply returns an error.
Fixes: db25cadc0887 ("net/mlx5: add flex item operations")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f48b58dcf4..61f07f459b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -69,7 +69,7 @@
#define MLX5_ROOT_TBL_MODIFY_NUM 16
/* Maximal number of flex items created on the port.*/
-#define MLX5_PORT_FLEX_ITEM_NUM 4
+#define MLX5_PORT_FLEX_ITEM_NUM 8
/* Maximal number of field/field parts to map into sample registers .*/
#define MLX5_FLEX_ITEM_MAPPING_NUM 32
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.153862138 +0800
+++ 0108-net-mlx5-fix-number-of-supported-flex-parsers.patch 2024-11-11 14:23:05.312192836 +0800
@@ -1 +1 @@
-From 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f Mon Sep 17 00:00:00 2001
+From 2ab85e98b8bf7c5e6f56d85a9e40f97106da4550 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 16d8f37b4ebb59a2b2d48dbd9c0f3b8302d4ab1f ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 5f7bfcd613..399923b443 100644
+index f48b58dcf4..61f07f459b 100644
* patch 'app/testpmd: remove flex item init command leftover' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (107 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
` (11 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=179f82ff012e353903bad9c9b0451b09c6941600
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 179f82ff012e353903bad9c9b0451b09c6941600 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:20 +0300
Subject: [PATCH] app/testpmd: remove flex item init command leftover
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d5c50397a1cc06419970afbea9cd1c37e3c08a5b ]
There was a leftover "flow flex init" command, used
for debug purposes, which had no useful functionality in
the production code.
Fixes: 59f3a8acbcdb ("app/testpmd: add flex item commands")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
app/test-pmd/cmdline_flow.c | 12 ------------
1 file changed, 12 deletions(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 7e6e06a04f..4b13d84ad1 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -105,7 +105,6 @@ enum index {
HASH,
/* Flex arguments */
- FLEX_ITEM_INIT,
FLEX_ITEM_CREATE,
FLEX_ITEM_DESTROY,
@@ -1249,7 +1248,6 @@ struct parse_action_priv {
})
static const enum index next_flex_item[] = {
- FLEX_ITEM_INIT,
FLEX_ITEM_CREATE,
FLEX_ITEM_DESTROY,
ZERO,
@@ -3932,15 +3930,6 @@ static const struct token token_list[] = {
.next = NEXT(next_flex_item),
.call = parse_flex,
},
- [FLEX_ITEM_INIT] = {
- .name = "init",
- .help = "flex item init",
- .args = ARGS(ARGS_ENTRY(struct buffer, args.flex.token),
- ARGS_ENTRY(struct buffer, port)),
- .next = NEXT(NEXT_ENTRY(COMMON_FLEX_TOKEN),
- NEXT_ENTRY(COMMON_PORT_ID)),
- .call = parse_flex
- },
[FLEX_ITEM_CREATE] = {
.name = "create",
.help = "flex item create",
@@ -10720,7 +10709,6 @@ parse_flex(struct context *ctx, const struct token *token,
switch (ctx->curr) {
default:
break;
- case FLEX_ITEM_INIT:
case FLEX_ITEM_CREATE:
case FLEX_ITEM_DESTROY:
out->command = ctx->curr;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.191913637 +0800
+++ 0109-app-testpmd-remove-flex-item-init-command-leftover.patch 2024-11-11 14:23:05.322192836 +0800
@@ -1 +1 @@
-From d5c50397a1cc06419970afbea9cd1c37e3c08a5b Mon Sep 17 00:00:00 2001
+From 179f82ff012e353903bad9c9b0451b09c6941600 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d5c50397a1cc06419970afbea9cd1c37e3c08a5b ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 5451b3a453..5f71e5ba44 100644
+index 7e6e06a04f..4b13d84ad1 100644
@@ -23 +25 @@
-@@ -106,7 +106,6 @@ enum index {
+@@ -105,7 +105,6 @@ enum index {
@@ -31 +33 @@
-@@ -1320,7 +1319,6 @@ struct parse_action_priv {
+@@ -1249,7 +1248,6 @@ struct parse_action_priv {
@@ -39 +41 @@
-@@ -4188,15 +4186,6 @@ static const struct token token_list[] = {
+@@ -3932,15 +3930,6 @@ static const struct token token_list[] = {
@@ -55 +57 @@
-@@ -11472,7 +11461,6 @@ parse_flex(struct context *ctx, const struct token *token,
+@@ -10720,7 +10709,6 @@ parse_flex(struct context *ctx, const struct token *token,
* patch 'net/mlx5: fix next protocol validation after flex item' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (108 preceding siblings ...)
2024-11-11 6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
` (10 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e03621f4c4a16e42b913431bcb86caecd3240865
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e03621f4c4a16e42b913431bcb86caecd3240865 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:21 +0300
Subject: [PATCH] net/mlx5: fix next protocol validation after flex item
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3847a3b192315491118eab9830e695eb2c9946e2 ]
During flow validation some items may check the preceding protocols.
In the case of a flex item the next protocol is opaque (or there can be
multiple ones), so we should set a neutral value and allow successful
validation, for example, for the combination of a flex item followed by
an ESP item.
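A tiny standalone model of that reasoning (with made-up item codes and
protocol handling, not the mlx5 validation code) shows why the neutral
value matters:
#include <stdint.h>
#include <stdio.h>

enum item { ITEM_IPV4, ITEM_FLEX, ITEM_ESP, ITEM_END };

static int
validate(const enum item *items)
{
        uint8_t next_protocol = 0xff;   /* 0xff means unknown/any */

        for (; *items != ITEM_END; items++) {
                switch (*items) {
                case ITEM_IPV4:
                        next_protocol = 17;     /* say the flex header rides over UDP */
                        break;
                case ITEM_FLEX:
                        /* The flex payload is opaque: reset to "unknown"
                         * so the following item is not rejected. */
                        next_protocol = 0xff;
                        break;
                case ITEM_ESP:
                        if (next_protocol != 0xff && next_protocol != 50)
                                return -1;      /* preceding protocol contradicts ESP */
                        break;
                default:
                        break;
                }
        }
        return 0;
}

int main(void)
{
        const enum item pattern[] = { ITEM_IPV4, ITEM_FLEX, ITEM_ESP, ITEM_END };

        printf("%s\n", validate(pattern) == 0 ? "valid" : "invalid");
        return 0;
}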
Fixes: a23e9b6e3ee9 ("net/mlx5: handle flex item in flows")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 863737ceba..af6bf7e411 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7850,6 +7850,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
tunnel != 0, error);
if (ret < 0)
return ret;
+ /* Reset for next proto, it is unknown. */
+ next_protocol = 0xff;
break;
case RTE_FLOW_ITEM_TYPE_METER_COLOR:
ret = flow_dv_validate_item_meter_color(dev, items,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-11-11 14:23:10.231892736 +0800
+++ 0110-net-mlx5-fix-next-protocol-validation-after-flex-ite.patch 2024-11-11 14:23:05.332192835 +0800
@@ -1 +1 @@
-From 3847a3b192315491118eab9830e695eb2c9946e2 Mon Sep 17 00:00:00 2001
+From e03621f4c4a16e42b913431bcb86caecd3240865 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3847a3b192315491118eab9830e695eb2c9946e2 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index e8ca2b3ed6..4451b114ae 100644
+index 863737ceba..af6bf7e411 100644
@@ -24 +26 @@
-@@ -8194,6 +8194,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
+@@ -7850,6 +7850,8 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
* patch 'net/mlx5: fix non full word sample fields in flex item' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (109 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
` (9 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=86088e4684e1c908882cb293d28d3b85226535dc
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 86088e4684e1c908882cb293d28d3b85226535dc Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:22 +0300
Subject: [PATCH] net/mlx5: fix non full word sample fields in flex item
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 97e19f0762e5235d6914845a59823d4ea36925bb ]
If the sample field in a flex item did not cover the entire
32-bit word (the width was not the full 32 bits) or was not
aligned on a byte boundary, the match on this sample in flows
was silently ignored or wrongly missed. The field mask
"def" was built with the wrong endianness, and non-byte-aligned
shifts were wrongly performed on the pattern masks and values.
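As an illustration of the byte and bit ordering involved, here is a
standalone sketch (an assumed simplified helper, not the mlx5 function
itself) that extracts a width-bit sample starting at bit "pos" from a
pattern laid out in network order, where bit 0 is the most significant
bit of byte 0:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* Assumes the pattern buffer fully covers the requested field. */
static uint32_t
get_bits_be(const uint8_t *buf, uint32_t pos, uint32_t width)
{
        uint64_t acc = 0;
        uint32_t first = pos / 8;
        uint32_t need = (pos % 8 + width + 7) / 8;

        for (uint32_t i = 0; i < need; i++)
                acc = (acc << 8) | buf[first + i];      /* big-endian load */
        acc >>= need * 8 - pos % 8 - width;             /* drop trailing bits */
        return (uint32_t)(acc & ((1ULL << width) - 1));
}

int main(void)
{
        const uint8_t pattern[] = { 0x12, 0x34, 0x56 };

        /* 12 bits starting at bit 4 of the pattern: expected 0x234. */
        printf("0x%" PRIx32 "\n", get_bits_be(pattern, 4, 12));
        return 0;
}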
Fixes: 6dac7d7ff2bf ("net/mlx5: translate flex item pattern into matcher")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 4 +--
drivers/net/mlx5/mlx5.h | 5 ++-
drivers/net/mlx5/mlx5_flow_dv.c | 5 ++-
drivers/net/mlx5/mlx5_flow_flex.c | 47 +++++++++++++--------------
4 files changed, 29 insertions(+), 32 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index 1b8cb18d63..daee2b6eb7 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -436,7 +436,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
idx = fc->fname - MLX5DR_DEFINER_FNAME_FLEX_PARSER_0;
byte_off -= idx * sizeof(uint32_t);
ret = mlx5_flex_get_parser_value_per_byte_off(flex, flex->handle, byte_off,
- false, is_inner, &val);
+ is_inner, &val);
if (ret == -1 || !val)
return;
@@ -2314,7 +2314,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
byte_off = base_off - i * sizeof(uint32_t);
ret = mlx5_flex_get_parser_value_per_byte_off(m, v->handle, byte_off,
- true, is_inner, &mask);
+ is_inner, &mask);
if (ret == -1) {
rte_errno = EINVAL;
return rte_errno;
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 61f07f459b..bce1d9e749 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -2493,11 +2493,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
void *key, const struct rte_flow_item *item,
bool is_inner);
int mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
- uint32_t idx, uint32_t *pos,
- bool is_inner, uint32_t *def);
+ uint32_t idx, uint32_t *pos, bool is_inner);
int mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
- bool is_mask, bool tunnel, uint32_t *value);
+ bool tunnel, uint32_t *value);
int mlx5_flex_get_tunnel_mode(const struct rte_flow_item *item,
enum rte_flow_item_flex_tunnel_mode *tunnel_mode);
int mlx5_flex_acquire_index(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index af6bf7e411..b447b1598a 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -1481,7 +1481,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
const struct mlx5_flex_pattern_field *map;
uint32_t offset = data->offset;
uint32_t width_left = width;
- uint32_t def;
uint32_t cur_width = 0;
uint32_t tmp_ofs;
uint32_t idx = 0;
@@ -1506,7 +1505,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
tmp_ofs = pos < data->offset ? data->offset - pos : 0;
for (j = i; i < flex->mapnum && width_left > 0; ) {
map = flex->map + i;
- id = mlx5_flex_get_sample_id(flex, i, &pos, false, &def);
+ id = mlx5_flex_get_sample_id(flex, i, &pos, false);
if (id == -1) {
i++;
/* All left length is dummy */
@@ -1525,7 +1524,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
* 2. Width has been covered.
*/
for (j = i + 1; j < flex->mapnum; j++) {
- tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false, &def);
+ tmp_id = mlx5_flex_get_sample_id(flex, j, &pos, false);
if (tmp_id == -1) {
i = j;
pos -= flex->map[j].width;
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index e7e6358144..c5dd323fa2 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -118,28 +118,32 @@ mlx5_flex_get_bitfield(const struct rte_flow_item_flex *item,
uint32_t pos, uint32_t width, uint32_t shift)
{
const uint8_t *ptr = item->pattern + pos / CHAR_BIT;
- uint32_t val, vbits;
+ uint32_t val, vbits, skip = pos % CHAR_BIT;
/* Proceed the bitfield start byte. */
MLX5_ASSERT(width <= sizeof(uint32_t) * CHAR_BIT && width);
MLX5_ASSERT(width + shift <= sizeof(uint32_t) * CHAR_BIT);
if (item->length <= pos / CHAR_BIT)
return 0;
- val = *ptr++ >> (pos % CHAR_BIT);
+ /* Bits are enumerated in byte in network order: 01234567 */
+ val = *ptr++;
vbits = CHAR_BIT - pos % CHAR_BIT;
- pos = (pos + vbits) / CHAR_BIT;
+ pos = RTE_ALIGN_CEIL(pos, CHAR_BIT) / CHAR_BIT;
vbits = RTE_MIN(vbits, width);
- val &= RTE_BIT32(vbits) - 1;
+ /* Load bytes to cover the field width, checking pattern boundary */
while (vbits < width && pos < item->length) {
uint32_t part = RTE_MIN(width - vbits, (uint32_t)CHAR_BIT);
uint32_t tmp = *ptr++;
- pos++;
- tmp &= RTE_BIT32(part) - 1;
- val |= tmp << vbits;
+ val |= tmp << RTE_ALIGN_CEIL(vbits, CHAR_BIT);
vbits += part;
+ pos++;
}
- return rte_bswap32(val <<= shift);
+ val = rte_cpu_to_be_32(val);
+ val <<= skip;
+ val >>= shift;
+ val &= (RTE_BIT64(width) - 1) << (sizeof(uint32_t) * CHAR_BIT - shift - width);
+ return val;
}
#define SET_FP_MATCH_SAMPLE_ID(x, def, msk, val, sid) \
@@ -211,21 +215,17 @@ mlx5_flex_set_match_sample(void *misc4_m, void *misc4_v,
* Where to search the value and mask.
* @param[in] is_inner
* For inner matching or not.
- * @param[in, def] def
- * Mask generated by mapping shift and width.
*
* @return
* 0 on success, -1 to ignore.
*/
int
mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
- uint32_t idx, uint32_t *pos,
- bool is_inner, uint32_t *def)
+ uint32_t idx, uint32_t *pos, bool is_inner)
{
const struct mlx5_flex_pattern_field *map = tp->map + idx;
uint32_t id = map->reg_id;
- *def = (RTE_BIT64(map->width) - 1) << map->shift;
/* Skip placeholders for DUMMY fields. */
if (id == MLX5_INVALID_SAMPLE_REG_ID) {
*pos += map->width;
@@ -252,8 +252,6 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
* Mlx5 flex item sample mapping handle.
* @param[in] byte_off
* Mlx5 flex item format_select_dw.
- * @param[in] is_mask
- * Spec or mask.
* @param[in] tunnel
* Tunnel mode or not.
* @param[in, def] value
@@ -265,25 +263,23 @@ mlx5_flex_get_sample_id(const struct mlx5_flex_item *tp,
int
mlx5_flex_get_parser_value_per_byte_off(const struct rte_flow_item_flex *item,
void *flex, uint32_t byte_off,
- bool is_mask, bool tunnel, uint32_t *value)
+ bool tunnel, uint32_t *value)
{
struct mlx5_flex_pattern_field *map;
struct mlx5_flex_item *tp = flex;
- uint32_t def, i, pos, val;
+ uint32_t i, pos, val;
int id;
*value = 0;
for (i = 0, pos = 0; i < tp->mapnum && pos < item->length * CHAR_BIT; i++) {
map = tp->map + i;
- id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel, &def);
+ id = mlx5_flex_get_sample_id(tp, i, &pos, tunnel);
if (id == -1)
continue;
if (id >= (int)tp->devx_fp->num_samples || id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
return -1;
if (byte_off == tp->devx_fp->sample_info[id].sample_dw_data * sizeof(uint32_t)) {
val = mlx5_flex_get_bitfield(item, pos, map->width, map->shift);
- if (is_mask)
- val &= RTE_BE32(def);
*value |= val;
}
pos += map->width;
@@ -355,10 +351,10 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
spec = item->spec;
mask = item->mask;
tp = (struct mlx5_flex_item *)spec->handle;
- for (i = 0; i < tp->mapnum; i++) {
+ for (i = 0; i < tp->mapnum && pos < (spec->length * CHAR_BIT); i++) {
struct mlx5_flex_pattern_field *map = tp->map + i;
uint32_t val, msk, def;
- int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner, &def);
+ int id = mlx5_flex_get_sample_id(tp, i, &pos, is_inner);
if (id == -1)
continue;
@@ -366,11 +362,14 @@ mlx5_flex_flow_translate_item(struct rte_eth_dev *dev,
if (id >= (int)tp->devx_fp->num_samples ||
id >= MLX5_GRAPH_NODE_SAMPLE_NUM)
return;
+ def = (uint32_t)(RTE_BIT64(map->width) - 1);
+ def <<= (sizeof(uint32_t) * CHAR_BIT - map->shift - map->width);
val = mlx5_flex_get_bitfield(spec, pos, map->width, map->shift);
- msk = mlx5_flex_get_bitfield(mask, pos, map->width, map->shift);
+ msk = pos < (mask->length * CHAR_BIT) ?
+ mlx5_flex_get_bitfield(mask, pos, map->width, map->shift) : def;
sample_id = tp->devx_fp->sample_ids[id];
mlx5_flex_set_match_sample(misc4_m, misc4_v,
- def, msk & def, val & msk & def,
+ def, msk, val & msk,
sample_id, id);
pos += map->width;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.293928835 +0800
+++ 0111-net-mlx5-fix-non-full-word-sample-fields-in-flex-ite.patch 2024-11-11 14:23:05.342192835 +0800
@@ -1 +1 @@
-From 97e19f0762e5235d6914845a59823d4ea36925bb Mon Sep 17 00:00:00 2001
+From 86088e4684e1c908882cb293d28d3b85226535dc Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 97e19f0762e5235d6914845a59823d4ea36925bb ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index 2dfcc5eba6..10b986d66b 100644
+index 1b8cb18d63..daee2b6eb7 100644
@@ -29 +31 @@
-@@ -574,7 +574,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
+@@ -436,7 +436,7 @@ mlx5dr_definer_flex_parser_set(struct mlx5dr_definer_fc *fc,
@@ -38 +40 @@
-@@ -2825,7 +2825,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
+@@ -2314,7 +2314,7 @@ mlx5dr_definer_conv_item_flex_parser(struct mlx5dr_definer_conv_data *cd,
@@ -48 +50 @@
-index 399923b443..18b4c15a26 100644
+index 61f07f459b..bce1d9e749 100644
@@ -51 +53 @@
-@@ -2602,11 +2602,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
+@@ -2493,11 +2493,10 @@ void mlx5_flex_flow_translate_item(struct rte_eth_dev *dev, void *matcher,
@@ -66 +68 @@
-index 4451b114ae..5f71573a86 100644
+index af6bf7e411..b447b1598a 100644
@@ -69 +71 @@
-@@ -1526,7 +1526,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1481,7 +1481,6 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -77 +79 @@
-@@ -1551,7 +1550,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1506,7 +1505,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -86 +88 @@
-@@ -1570,7 +1569,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
+@@ -1525,7 +1524,7 @@ mlx5_modify_flex_item(const struct rte_eth_dev *dev,
@@ -96 +98 @@
-index 0c41b956b0..bf38643a23 100644
+index e7e6358144..c5dd323fa2 100644
* patch 'net/mlx5: fix flex item header length field translation' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (110 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
` (8 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: xuemingl, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ab563e0a3ea36bdd4139714eafd59129619d88d8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ab563e0a3ea36bdd4139714eafd59129619d88d8 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Wed, 18 Sep 2024 16:46:23 +0300
Subject: [PATCH] net/mlx5: fix flex item header length field translation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d ]
There are hardware-imposed limitations on the header length
field description for the mask and shift combinations in the
FIELD_MODE_OFFSET mode.
The patch updates:
- the parameter check for FIELD_MODE_OFFSET for the header length
  field
- the check whether the length field crosses dword boundaries in the
  header
- the mask extension to the hardware-required width of 6 bits
- the adjustment of the mask left margin offset, preventing a dword
  offset
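As a rough software model of the OFFSET-mode length extraction that
these limits constrain (illustrative values only, not the device
parse-graph configuration), the header length is taken from one 32-bit
word of the header, masked and shifted, and must not cross a dword
boundary:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t
hdr_len_from_field(uint32_t dword, uint32_t mask, uint32_t shift,
                   uint32_t unit /* bytes per length unit */)
{
        return ((dword & mask) >> shift) * unit;
}

int main(void)
{
        /* IPv4-like example: the 4-bit IHL sits in bits 24..27 of the
         * first header word (read as a big-endian value) and is counted
         * in 4-byte units. */
        uint32_t first_word = 0x45000054;       /* version 4, IHL 5 */

        printf("%" PRIu32 " bytes\n",
               hdr_len_from_field(first_word, 0x0F000000, 24, 4));
        return 0;
}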
Fixes: b293e8e49d78 ("net/mlx5: translate flex item configuration")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_flex.c | 120 ++++++++++++++++--------------
1 file changed, 66 insertions(+), 54 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_flex.c b/drivers/net/mlx5/mlx5_flow_flex.c
index c5dd323fa2..58d8c61443 100644
--- a/drivers/net/mlx5/mlx5_flow_flex.c
+++ b/drivers/net/mlx5/mlx5_flow_flex.c
@@ -449,12 +449,14 @@ mlx5_flex_release_index(struct rte_eth_dev *dev,
*
* shift mask
* ------- ---------------
- * 0 b111100 0x3C
- * 1 b111110 0x3E
- * 2 b111111 0x3F
- * 3 b011111 0x1F
- * 4 b001111 0x0F
- * 5 b000111 0x07
+ * 0 b11111100 0x3C
+ * 1 b01111110 0x3E
+ * 2 b00111111 0x3F
+ * 3 b00011111 0x1F
+ * 4 b00001111 0x0F
+ * 5 b00000111 0x07
+ * 6 b00000011 0x03
+ * 7 b00000001 0x01
*/
static uint8_t
mlx5_flex_hdr_len_mask(uint8_t shift,
@@ -464,8 +466,7 @@ mlx5_flex_hdr_len_mask(uint8_t shift,
int diff = shift - MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
base_mask = mlx5_hca_parse_graph_node_base_hdr_len_mask(attr);
- return diff == 0 ? base_mask :
- diff < 0 ? (base_mask << -diff) & base_mask : base_mask >> diff;
+ return diff < 0 ? base_mask << -diff : base_mask >> diff;
}
static int
@@ -476,7 +477,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
{
const struct rte_flow_item_flex_field *field = &conf->next_header;
struct mlx5_devx_graph_node_attr *node = &devx->devx_conf;
- uint32_t len_width, mask;
if (field->field_base % CHAR_BIT)
return rte_flow_error_set
@@ -504,7 +504,14 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
"negative header length field base (FIXED)");
node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
break;
- case FIELD_MODE_OFFSET:
+ case FIELD_MODE_OFFSET: {
+ uint32_t msb, lsb;
+ int32_t shift = field->offset_shift;
+ uint32_t offset = field->offset_base;
+ uint32_t mask = field->offset_mask;
+ uint32_t wmax = attr->header_length_mask_width +
+ MLX5_PARSE_GRAPH_NODE_HDR_LEN_SHIFT_DWORD;
+
if (!(attr->header_length_mode &
RTE_BIT32(MLX5_GRAPH_NODE_LEN_FIELD)))
return rte_flow_error_set
@@ -514,47 +521,73 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"field size is a must for offset mode");
- if (field->field_size + field->offset_base < attr->header_length_mask_width)
+ if ((offset ^ (field->field_size + offset)) >> 5)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "field size plus offset_base is too small");
- node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
- if (field->offset_mask == 0 ||
- !rte_is_power_of_2(field->offset_mask + 1))
+ "field crosses the 32-bit word boundary");
+ /* Hardware counts in dwords, all shifts done by offset within mask */
+ if (shift < 0 || (uint32_t)shift >= wmax)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "header length field shift exceeds limits (OFFSET)");
+ if (!mask)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "zero length field offset mask (OFFSET)");
+ msb = rte_fls_u32(mask) - 1;
+ lsb = rte_bsf32(mask);
+ if (!rte_is_power_of_2((mask >> lsb) + 1))
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "invalid length field offset mask (OFFSET)");
- len_width = rte_fls_u32(field->offset_mask);
- if (len_width > attr->header_length_mask_width)
+ "length field offset mask not contiguous (OFFSET)");
+ if (msb >= field->field_size)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field offset mask too wide (OFFSET)");
- mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
- if (mask < field->offset_mask)
+ "length field offset mask exceeds field size (OFFSET)");
+ if (msb >= wmax)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field shift too big (OFFSET)");
- node->header_length_field_mask = RTE_MIN(mask,
- field->offset_mask);
+ "length field offset mask exceeds supported width (OFFSET)");
+ if (mask & ~mlx5_flex_hdr_len_mask(shift, attr))
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "mask and shift combination not supported (OFFSET)");
+ msb++;
+ offset += field->field_size - msb;
+ if (msb < attr->header_length_mask_width) {
+ if (attr->header_length_mask_width - msb > offset)
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "field size plus offset_base is too small");
+ offset += msb;
+ /*
+ * Here we can move to preceding dword. Hardware does
+ * cyclic left shift so we should avoid this and stay
+ * at current dword offset.
+ */
+ offset = (offset & ~0x1Fu) |
+ ((offset - attr->header_length_mask_width) & 0x1F);
+ }
+ node->header_length_mode = MLX5_GRAPH_NODE_LEN_FIELD;
+ node->header_length_field_mask = mask;
+ node->header_length_field_shift = shift;
+ node->header_length_field_offset = offset;
break;
+ }
case FIELD_MODE_BITMASK:
if (!(attr->header_length_mode &
RTE_BIT32(MLX5_GRAPH_NODE_LEN_BITMASK)))
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"unsupported header length field mode (BITMASK)");
- if (attr->header_length_mask_width < field->field_size)
+ if (field->offset_shift > 15 || field->offset_shift < 0)
return rte_flow_error_set
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "header length field width exceeds limit");
+ "header length field shift exceeds limit (BITMASK)");
node->header_length_mode = MLX5_GRAPH_NODE_LEN_BITMASK;
- mask = mlx5_flex_hdr_len_mask(field->offset_shift, attr);
- if (mask < field->offset_mask)
- return rte_flow_error_set
- (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "length field shift too big (BITMASK)");
- node->header_length_field_mask = RTE_MIN(mask,
- field->offset_mask);
+ node->header_length_field_mask = field->offset_mask;
+ node->header_length_field_shift = field->offset_shift;
+ node->header_length_field_offset = field->offset_base;
break;
default:
return rte_flow_error_set
@@ -567,27 +600,6 @@ mlx5_flex_translate_length(struct mlx5_hca_flex_attr *attr,
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
"header length field base exceeds limit");
node->header_length_base_value = field->field_base / CHAR_BIT;
- if (field->field_mode == FIELD_MODE_OFFSET ||
- field->field_mode == FIELD_MODE_BITMASK) {
- if (field->offset_shift > 15 || field->offset_shift < 0)
- return rte_flow_error_set
- (error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "header length field shift exceeds limit");
- node->header_length_field_shift = field->offset_shift;
- node->header_length_field_offset = field->offset_base;
- }
- if (field->field_mode == FIELD_MODE_OFFSET) {
- if (field->field_size > attr->header_length_mask_width) {
- node->header_length_field_offset +=
- field->field_size - attr->header_length_mask_width;
- } else if (field->field_size < attr->header_length_mask_width) {
- node->header_length_field_offset -=
- attr->header_length_mask_width - field->field_size;
- node->header_length_field_mask =
- RTE_MIN(node->header_length_field_mask,
- (1u << field->field_size) - 1);
- }
- }
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.363397633 +0800
+++ 0112-net-mlx5-fix-flex-item-header-length-field-translati.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d Mon Sep 17 00:00:00 2001
+From ab563e0a3ea36bdd4139714eafd59129619d88d8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b04b06f4cb3f3bdd24228f3ca2ec5b3a7b64308d ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index bf38643a23..afed16985a 100644
+index c5dd323fa2..58d8c61443 100644
* patch 'build: remove version check on compiler links function' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (111 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
` (7 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Bruce Richardson
Cc: xuemingl, Robin Jarry, Ferruh Yigit, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b2d1e9cfcd735986ffef0ec88d604971a8172708
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b2d1e9cfcd735986ffef0ec88d604971a8172708 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Fri, 20 Sep 2024 13:57:34 +0100
Subject: [PATCH] build: remove version check on compiler links function
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2909f9afbfd1b54ace204d40d57b68e6058aca28 ]
The "compiler.links()" function meson documentation [1] is a little
unclear, in a casual reading implies that the function was new in 0.60
meson release. In fact, it is only enhanced as described in that
release, but is present earlier.
As such, we can remove the version checks preceding the calls to links
function in our code.
[1] https://mesonbuild.com/Reference-manual_returned_compiler.html#compilerlinks
Fixes: fd809737cf8c ("common/qat: fix build with incompatible IPsec library")
Fixes: fb94d8243894 ("crypto/ipsec_mb: add dependency check for cross build")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Robin Jarry <rjarry@redhat.com>
Tested-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
drivers/common/qat/meson.build | 2 +-
drivers/crypto/ipsec_mb/meson.build | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/drivers/common/qat/meson.build b/drivers/common/qat/meson.build
index 62abcb6fe3..3d28bd2af5 100644
--- a/drivers/common/qat/meson.build
+++ b/drivers/common/qat/meson.build
@@ -43,7 +43,7 @@ else
IMB_required_ver = '1.4.0'
IMB_header = '#include<intel-ipsec-mb.h>'
libipsecmb = cc.find_library('IPSec_MB', required: false)
- if libipsecmb.found() and meson.version().version_compare('>=0.60') and cc.links(
+ if libipsecmb.found() and cc.links(
'int main(void) {return 0;}', dependencies: libipsecmb)
# version comes with quotes, so we split based on " and take the middle
imb_ver = cc.get_define('IMB_VERSION_STR',
diff --git a/drivers/crypto/ipsec_mb/meson.build b/drivers/crypto/ipsec_mb/meson.build
index 87bf965554..81631d3050 100644
--- a/drivers/crypto/ipsec_mb/meson.build
+++ b/drivers/crypto/ipsec_mb/meson.build
@@ -17,7 +17,7 @@ if not lib.found()
build = false
reason = 'missing dependency, "libIPSec_MB"'
# if the lib is found, check it's the right format
-elif meson.version().version_compare('>=0.60') and not cc.links(
+elif not cc.links(
'int main(void) {return 0;}', dependencies: lib)
build = false
reason = 'incompatible dependency, "libIPSec_MB"'
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.396636933 +0800
+++ 0113-build-remove-version-check-on-compiler-links-functio.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 2909f9afbfd1b54ace204d40d57b68e6058aca28 Mon Sep 17 00:00:00 2001
+From b2d1e9cfcd735986ffef0ec88d604971a8172708 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2909f9afbfd1b54ace204d40d57b68e6058aca28 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -29 +31 @@
-index 3893b127dd..5a8de16fe0 100644
+index 62abcb6fe3..3d28bd2af5 100644
* patch 'hash: fix thash LFSR initialization' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (112 preceding siblings ...)
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
` (6 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c6474cb613a54d7d2c073080c183ce70c9634ecd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c6474cb613a54d7d2c073080c183ce70c9634ecd Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Fri, 6 Sep 2024 17:01:41 +0000
Subject: [PATCH] hash: fix thash LFSR initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ebf7f1188ea83d6154746e90d535392113ecb1e8 ]
The reverse polynomial for an LFSR was initialized improperly, which
could generate an improper bit sequence in some situations.
This patch implements a proper polynomial reversing function.
Fixes: 28ebff11c2dc ("hash: add predictable RSS")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
lib/hash/rte_thash.c | 26 +++++++++++++++++++++++++-
1 file changed, 25 insertions(+), 1 deletion(-)
diff --git a/lib/hash/rte_thash.c b/lib/hash/rte_thash.c
index 4ff567ee5a..a952006686 100644
--- a/lib/hash/rte_thash.c
+++ b/lib/hash/rte_thash.c
@@ -160,6 +160,30 @@ thash_get_rand_poly(uint32_t poly_degree)
RTE_DIM(irreducible_poly_table[poly_degree])];
}
+static inline uint32_t
+get_rev_poly(uint32_t poly, int degree)
+{
+ int i;
+ /*
+ * The implicit highest coefficient of the polynomial
+ * becomes the lowest after reversal.
+ */
+ uint32_t rev_poly = 1;
+ uint32_t mask = (1 << degree) - 1;
+
+ /*
+ * Here we assume "poly" argument is an irreducible polynomial,
+ * thus the lowest coefficient of the "poly" must always be equal to "1".
+ * After the reversal, this lowest coefficient becomes the highest and
+ * it is omitted since the highest coefficient is implicitly determined by
+ * degree of the polynomial.
+ */
+ for (i = 1; i < degree; i++)
+ rev_poly |= ((poly >> i) & 0x1) << (degree - i);
+
+ return rev_poly & mask;
+}
+
static struct thash_lfsr *
alloc_lfsr(struct rte_thash_ctx *ctx)
{
@@ -179,7 +203,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
lfsr->state = rte_rand() & ((1 << lfsr->deg) - 1);
} while (lfsr->state == 0);
/* init reverse order polynomial */
- lfsr->rev_poly = (lfsr->poly >> 1) | (1 << (lfsr->deg - 1));
+ lfsr->rev_poly = get_rev_poly(lfsr->poly, lfsr->deg);
/* init proper rev_state*/
lfsr->rev_state = lfsr->state;
for (i = 0; i <= lfsr->deg; i++)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.434207333 +0800
+++ 0114-hash-fix-thash-LFSR-initialization.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From ebf7f1188ea83d6154746e90d535392113ecb1e8 Mon Sep 17 00:00:00 2001
+From c6474cb613a54d7d2c073080c183ce70c9634ecd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ebf7f1188ea83d6154746e90d535392113ecb1e8 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 10721effe6..99a685f0c8 100644
+index 4ff567ee5a..a952006686 100644
@@ -22 +24 @@
-@@ -166,6 +166,30 @@ thash_get_rand_poly(uint32_t poly_degree)
+@@ -160,6 +160,30 @@ thash_get_rand_poly(uint32_t poly_degree)
@@ -53 +55 @@
-@@ -185,7 +209,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
+@@ -179,7 +203,7 @@ alloc_lfsr(struct rte_thash_ctx *ctx)
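To make the reversal described in the commit message above easier to follow, here is a minimal standalone sketch. It mirrors the get_rev_poly() logic quoted in the patch; the main() harness and the sample degree-7 trinomial are illustrative only and are not part of DPDK.

#include <stdint.h>
#include <stdio.h>

/* Standalone copy of the reversal logic from the patch above.
 * "poly" holds the coefficients of x^0..x^(degree-1); the x^degree
 * term is implicit. */
static uint32_t
rev_poly(uint32_t poly, int degree)
{
        /* The implicit x^degree term becomes the constant term. */
        uint32_t rev = 1;
        uint32_t mask = (1u << degree) - 1;
        int i;

        /* The constant term of an irreducible poly is always 1; after
         * reversal it becomes the implicit x^degree term and is dropped. */
        for (i = 1; i < degree; i++)
                rev |= ((poly >> i) & 0x1) << (degree - i);

        return rev & mask;
}

int main(void)
{
        /* x^7 + x^3 + 1, stored without the implicit x^7 term: 0x09 */
        uint32_t poly = 0x09;

        /* Prints "0x9 -> 0x11", i.e. x^7 + x^4 + 1. */
        printf("0x%x -> 0x%x\n", (unsigned)poly, (unsigned)rev_poly(poly, 7));
        return 0;
}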
* patch 'net/nfp: notify flower firmware about PF speed' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (113 preceding siblings ...)
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
` (5 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Zerun Fu; +Cc: xuemingl, Chaoyong He, Long Wu, Peng Zhang, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ebec3137a58cbd31c0cf8483586527b1d48ae779
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ebec3137a58cbd31c0cf8483586527b1d48ae779 Mon Sep 17 00:00:00 2001
From: Zerun Fu <zerun.fu@corigine.com>
Date: Mon, 14 Oct 2024 10:43:55 +0800
Subject: [PATCH] net/nfp: notify flower firmware about PF speed
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 2254813795099aa6c05caed5e8c0dcc7a8f03b4e ]
When using flower firmware, the VF speed is obtained from the
firmware, and the firmware gets the VF speed from the PF.
But the previous logic did not notify the firmware about the PF speed,
which caused the VF speed to be unavailable.
Fix this by adding the logic to notify the firmware about the PF speed.
Fixes: e1124c4f8a45 ("net/nfp: add flower representor framework")
Signed-off-by: Zerun Fu <zerun.fu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
---
drivers/net/nfp/flower/nfp_flower_representor.c | 3 +++
drivers/net/nfp/nfp_net_common.c | 2 +-
drivers/net/nfp/nfp_net_common.h | 2 ++
3 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c
index 23709acbba..ada28d07c6 100644
--- a/drivers/net/nfp/flower/nfp_flower_representor.c
+++ b/drivers/net/nfp/flower/nfp_flower_representor.c
@@ -33,6 +33,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
pf_hw = repr->app_fw_flower->pf_hw;
ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
+ if (repr->repr_type == NFP_REPR_TYPE_PF)
+ nfp_net_notify_port_speed(repr->app_fw_flower->pf_hw, link);
+
return ret;
}
diff --git a/drivers/net/nfp/nfp_net_common.c b/drivers/net/nfp/nfp_net_common.c
index 134a9b807e..bf44373b26 100644
--- a/drivers/net/nfp/nfp_net_common.c
+++ b/drivers/net/nfp/nfp_net_common.c
@@ -166,7 +166,7 @@ nfp_net_link_speed_rte2nfp(uint32_t speed)
return NFP_NET_CFG_STS_LINK_RATE_UNKNOWN;
}
-static void
+void
nfp_net_notify_port_speed(struct nfp_net_hw *hw,
struct rte_eth_link *link)
{
diff --git a/drivers/net/nfp/nfp_net_common.h b/drivers/net/nfp/nfp_net_common.h
index 41d59bfa99..72286ab5c9 100644
--- a/drivers/net/nfp/nfp_net_common.h
+++ b/drivers/net/nfp/nfp_net_common.h
@@ -282,6 +282,8 @@ int nfp_net_flow_ctrl_set(struct rte_eth_dev *dev,
void nfp_pf_uninit(struct nfp_pf_dev *pf_dev);
uint32_t nfp_net_get_port_num(struct nfp_pf_dev *pf_dev,
struct nfp_eth_table *nfp_eth_table);
+void nfp_net_notify_port_speed(struct nfp_net_hw *hw,
+ struct rte_eth_link *link);
#define NFP_PRIV_TO_APP_FW_NIC(app_fw_priv)\
((struct nfp_app_fw_nic *)app_fw_priv)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.471975632 +0800
+++ 0115-net-nfp-notify-flower-firmware-about-PF-speed.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 2254813795099aa6c05caed5e8c0dcc7a8f03b4e Mon Sep 17 00:00:00 2001
+From ebec3137a58cbd31c0cf8483586527b1d48ae779 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 2254813795099aa6c05caed5e8c0dcc7a8f03b4e ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index eb0a02874b..eae6ba39e1 100644
+index 23709acbba..ada28d07c6 100644
@@ -31,3 +33,3 @@
-@@ -37,6 +37,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
-
- ret = nfp_net_link_update_common(dev, link, link->link_status);
+@@ -33,6 +33,9 @@ nfp_flower_repr_link_update(struct rte_eth_dev *dev,
+ pf_hw = repr->app_fw_flower->pf_hw;
+ ret = nfp_net_link_update_common(dev, pf_hw, link, link->link_status);
@@ -42 +44 @@
-index b986ed4622..f76d5a6895 100644
+index 134a9b807e..bf44373b26 100644
@@ -45,2 +47,2 @@
-@@ -184,7 +184,7 @@ nfp_net_link_speed_nfp2rte_check(uint32_t speed)
- return RTE_ETH_SPEED_NUM_NONE;
+@@ -166,7 +166,7 @@ nfp_net_link_speed_rte2nfp(uint32_t speed)
+ return NFP_NET_CFG_STS_LINK_RATE_UNKNOWN;
@@ -55 +57 @@
-index 8429db68f0..d4fe8338b9 100644
+index 41d59bfa99..72286ab5c9 100644
@@ -58,4 +60,4 @@
-@@ -383,6 +383,8 @@ int nfp_net_vf_config_app_init(struct nfp_net_hw *net_hw,
- bool nfp_net_version_check(struct nfp_hw *hw,
- struct nfp_pf_dev *pf_dev);
- void nfp_net_ctrl_bar_size_set(struct nfp_pf_dev *pf_dev);
+@@ -282,6 +282,8 @@ int nfp_net_flow_ctrl_set(struct rte_eth_dev *dev,
+ void nfp_pf_uninit(struct nfp_pf_dev *pf_dev);
+ uint32_t nfp_net_get_port_num(struct nfp_pf_dev *pf_dev,
+ struct nfp_eth_table *nfp_eth_table);
* patch 'net/nfp: do not set IPv6 flag in transport mode' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (114 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
` (4 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Shihong Wang; +Cc: xuemingl, Long Wu, Peng Zhang, Chaoyong He, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0b75611d8ab73a141d4141de7ba56b927eb05414
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0b75611d8ab73a141d4141de7ba56b927eb05414 Mon Sep 17 00:00:00 2001
From: Shihong Wang <shihong.wang@corigine.com>
Date: Mon, 14 Oct 2024 10:43:56 +0800
Subject: [PATCH] net/nfp: do not set IPv6 flag in transport mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c ]
Transport mode only encapsulates the security protocol header; it
does not pay attention to the IP protocol type and does not need to
set the IPv6 flag.
Fixes: 3d21da66c06b ("net/nfp: create security session")
Signed-off-by: Shihong Wang <shihong.wang@corigine.com>
Reviewed-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Peng Zhang <peng.zhang@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
---
drivers/net/nfp/nfp_ipsec.c | 15 ++-------------
1 file changed, 2 insertions(+), 13 deletions(-)
diff --git a/drivers/net/nfp/nfp_ipsec.c b/drivers/net/nfp/nfp_ipsec.c
index b10cda570b..56f3777226 100644
--- a/drivers/net/nfp/nfp_ipsec.c
+++ b/drivers/net/nfp/nfp_ipsec.c
@@ -1053,20 +1053,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
break;
case RTE_SECURITY_IPSEC_SA_MODE_TRANSPORT:
- type = conf->ipsec.tunnel.type;
cfg->ctrl_word.mode = NFP_IPSEC_MODE_TRANSPORT;
- if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV4) {
- memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
- memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
- cfg->ipv6 = 0;
- } else if (type == RTE_SECURITY_IPSEC_TUNNEL_IPV6) {
- memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
- memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
- cfg->ipv6 = 1;
- } else {
- PMD_DRV_LOG(ERR, "Unsupported address family!");
- return -EINVAL;
- }
+ memset(&cfg->src_ip, 0, sizeof(cfg->src_ip));
+ memset(&cfg->dst_ip, 0, sizeof(cfg->dst_ip));
break;
default:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.506232932 +0800
+++ 0116-net-nfp-do-not-set-IPv6-flag-in-transport-mode.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c Mon Sep 17 00:00:00 2001
+From 0b75611d8ab73a141d4141de7ba56b927eb05414 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4f64ebdd41ce8bb60dba95589a5cc684fb9cb89c ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 647bc2bb6d..89116af1b2 100644
+index b10cda570b..56f3777226 100644
@@ -25 +27 @@
-@@ -1056,20 +1056,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
+@@ -1053,20 +1053,9 @@ nfp_ipsec_msg_build(struct rte_eth_dev *eth_dev,
* patch 'dmadev: fix potential null pointer access' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (115 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
` (3 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Chengwen Feng; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=43802f5b3729302977db17e78dbeac7ac3a09353
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 43802f5b3729302977db17e78dbeac7ac3a09353 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Sat, 12 Oct 2024 17:17:34 +0800
Subject: [PATCH] dmadev: fix potential null pointer access
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e5389d427ec43ab805d0a1caed89b63656fd7fde ]
When rte_dma_vchan_status(dev_id, vchan, NULL) is called, a null pointer
access is triggered.
This patch adds the missing null pointer check.
Fixes: 5e0f85912754 ("dmadev: add channel status check for testing use")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/dmadev/rte_dmadev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/dmadev/rte_dmadev.c b/lib/dmadev/rte_dmadev.c
index 5093c6e38b..a2e52cc8ff 100644
--- a/lib/dmadev/rte_dmadev.c
+++ b/lib/dmadev/rte_dmadev.c
@@ -731,7 +731,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
{
struct rte_dma_dev *dev = &rte_dma_devices[dev_id];
- if (!rte_dma_is_valid(dev_id))
+ if (!rte_dma_is_valid(dev_id) || status == NULL)
return -EINVAL;
if (vchan >= dev->data->dev_conf.nb_vchans) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.548093632 +0800
+++ 0117-dmadev-fix-potential-null-pointer-access.patch 2024-11-11 14:23:05.352192835 +0800
@@ -1 +1 @@
-From e5389d427ec43ab805d0a1caed89b63656fd7fde Mon Sep 17 00:00:00 2001
+From 43802f5b3729302977db17e78dbeac7ac3a09353 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e5389d427ec43ab805d0a1caed89b63656fd7fde ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 845727210f..60c3d8ebf6 100644
+index 5093c6e38b..a2e52cc8ff 100644
@@ -22 +24 @@
-@@ -741,7 +741,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
+@@ -731,7 +731,7 @@ rte_dma_vchan_status(int16_t dev_id, uint16_t vchan, enum rte_dma_vchan_status *
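As a usage note for the fix above, callers are expected to pass a valid status pointer and check the return code. The fragment below is a small illustrative call site, assuming 'dev_id' identifies an already configured and started dmadev; it is not code from the dmadev library itself.

#include <rte_dmadev.h>

/* With the fix above, passing NULL instead of &status now returns
 * -EINVAL instead of dereferencing a null pointer. */
static int
vchan_is_idle(int16_t dev_id)
{
        enum rte_dma_vchan_status status;
        int ret;

        ret = rte_dma_vchan_status(dev_id, 0, &status);
        if (ret < 0)
                return ret; /* invalid device, vchan or status pointer */

        return status == RTE_DMA_VCHAN_IDLE;
}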
* patch 'net/gve/base: fix build with Fedora Rawhide' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (116 preceding siblings ...)
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
` (2 subsequent siblings)
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Joshua Washington; +Cc: xuemingl, David Marchand, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a Mon Sep 17 00:00:00 2001
From: Joshua Washington <joshwash@google.com>
Date: Thu, 17 Oct 2024 16:42:33 -0700
Subject: [PATCH] net/gve/base: fix build with Fedora Rawhide
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f0d9e787747dda0715654da9f0501f54fe105868 ]
Currently, a number of integer types are typedef'd to their corresponding
userspace or RTE values. This can be problematic if these types are
already defined somewhere else, as it would cause type collisions.
This patch changes the typedefs to #define macros which are only defined
if the types are not defined already.
Note: this was reported by OBS CI on 2024/10/17, when compiling DPDK
in Fedora Rawhide.
Fixes: c9ba2caf6302 ("net/gve/base: add OS-specific implementation")
Fixes: abf1242fbb84 ("net/gve: add struct members and typedefs for DQO")
Reported-by: David Marchand <david.marchand@redhat.com>
Signed-off-by: Joshua Washington <joshwash@google.com>
---
drivers/net/gve/base/gve_osdep.h | 48 ++++++++++++++++++++++++--------
1 file changed, 36 insertions(+), 12 deletions(-)
diff --git a/drivers/net/gve/base/gve_osdep.h b/drivers/net/gve/base/gve_osdep.h
index a3702f4b8c..a6eb52306f 100644
--- a/drivers/net/gve/base/gve_osdep.h
+++ b/drivers/net/gve/base/gve_osdep.h
@@ -29,22 +29,46 @@
#include <sys/utsname.h>
#endif
-typedef uint8_t u8;
-typedef uint16_t u16;
-typedef uint32_t u32;
-typedef uint64_t u64;
+#ifndef u8
+#define u8 uint8_t
+#endif
+#ifndef u16
+#define u16 uint16_t
+#endif
+#ifndef u32
+#define u32 uint32_t
+#endif
+#ifndef u64
+#define u64 uint64_t
+#endif
-typedef rte_be16_t __sum16;
+#ifndef __sum16
+#define __sum16 rte_be16_t
+#endif
-typedef rte_be16_t __be16;
-typedef rte_be32_t __be32;
-typedef rte_be64_t __be64;
+#ifndef __be16
+#define __be16 rte_be16_t
+#endif
+#ifndef __be32
+#define __be32 rte_be32_t
+#endif
+#ifndef __be64
+#define __be64 rte_be64_t
+#endif
-typedef rte_le16_t __le16;
-typedef rte_le32_t __le32;
-typedef rte_le64_t __le64;
+#ifndef __le16
+#define __le16 rte_le16_t
+#endif
+#ifndef __le32
+#define __le32 rte_le32_t
+#endif
+#ifndef __le64
+#define __le64 rte_le64_t
+#endif
-typedef rte_iova_t dma_addr_t;
+#ifndef dma_addr_t
+#define dma_addr_t rte_iova_t
+#endif
#define ETH_MIN_MTU RTE_ETHER_MIN_MTU
#define ETH_ALEN RTE_ETHER_ADDR_LEN
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.581961132 +0800
+++ 0118-net-gve-base-fix-build-with-Fedora-Rawhide.patch 2024-11-11 14:23:05.362192835 +0800
@@ -1 +1 @@
-From f0d9e787747dda0715654da9f0501f54fe105868 Mon Sep 17 00:00:00 2001
+From 6f5a555e28d8578ac2ce1cecf373d5f4c02fcd2a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f0d9e787747dda0715654da9f0501f54fe105868 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index c0ee0d567c..64181cebd6 100644
+index a3702f4b8c..a6eb52306f 100644
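The reasoning in the commit message above, preferring guarded #define macros over typedefs, can be shown with a toy program that has nothing to do with the gve driver itself: #ifndef only sees preprocessor macros, so making the names macros lets a pre-existing definition win without a clash, which a typedef cannot do.

#include <stdint.h>
#include <stdio.h>

/* Pretend another header already provided its own 'u16'. */
#define u16 unsigned short

/* Driver-style guarded definition: it only takes effect when no
 * earlier macro exists, so it cannot clash with the one above.
 * A typedef would be invisible to #ifndef and could not be skipped. */
#ifndef u16
#define u16 uint16_t
#endif

int main(void)
{
        u16 v = 0xbeef;   /* resolves to 'unsigned short' here */
        printf("u16 is %zu bytes\n", sizeof(v));
        return 0;
}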
* patch 'power: fix mapped lcore ID' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (117 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Sivaprasad Tummala; +Cc: xuemingl, Konstantin Ananyev, Huisong Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ff3018e2ffc9660ddd9770657dbf491442a34c7a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ff3018e2ffc9660ddd9770657dbf491442a34c7a Mon Sep 17 00:00:00 2001
From: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Date: Fri, 18 Oct 2024 03:34:34 +0000
Subject: [PATCH] power: fix mapped lcore ID
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5c9b07eeba55d527025f1f4945e2dbb366f21215 ]
This commit fixes an issue in the power library
related to using lcores mapped to different
physical cores (--lcores option in EAL).
Previously, the power library incorrectly accessed
CPU sysfs attributes for power management, treating
lcore IDs as CPU IDs.
For example, with --lcores '1@128', lcore ID '1' was interpreted
as CPU ID '1' instead of '128'.
This patch corrects the CPU ID based on the lcore-to-CPU
mappings. It also constrains power management support
for lcores mapped to multiple physical cores/threads.
When multiple lcores are mapped to the same physical core,
invoking frequency scaling APIs on any lcore will apply the
changes effectively.
Fixes: 53e54bf81700 ("eal: new option --lcores for cpu assignment")
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
---
app/test/test_power_cpufreq.c | 21 ++++++++++++++++++---
lib/power/power_acpi_cpufreq.c | 6 +++++-
lib/power/power_amd_pstate_cpufreq.c | 6 +++++-
lib/power/power_common.c | 22 ++++++++++++++++++++++
lib/power/power_common.h | 1 +
lib/power/power_cppc_cpufreq.c | 6 +++++-
lib/power/power_pstate_cpufreq.c | 6 +++++-
7 files changed, 61 insertions(+), 7 deletions(-)
diff --git a/app/test/test_power_cpufreq.c b/app/test/test_power_cpufreq.c
index 619b2811c6..edbd34424e 100644
--- a/app/test/test_power_cpufreq.c
+++ b/app/test/test_power_cpufreq.c
@@ -9,6 +9,7 @@
#include <string.h>
#include <inttypes.h>
#include <rte_cycles.h>
+#include <rte_lcore.h>
#include "test.h"
@@ -46,9 +47,10 @@ test_power_caps(void)
static uint32_t total_freq_num;
static uint32_t freqs[TEST_POWER_FREQS_NUM_MAX];
+static uint32_t cpu_id;
static int
-check_cur_freq(unsigned int lcore_id, uint32_t idx, bool turbo)
+check_cur_freq(__rte_unused unsigned int lcore_id, uint32_t idx, bool turbo)
{
#define TEST_POWER_CONVERT_TO_DECIMAL 10
#define MAX_LOOP 100
@@ -62,13 +64,13 @@ check_cur_freq(unsigned int lcore_id, uint32_t idx, bool turbo)
int i;
if (snprintf(fullpath, sizeof(fullpath),
- TEST_POWER_SYSFILE_CPUINFO_FREQ, lcore_id) < 0) {
+ TEST_POWER_SYSFILE_CPUINFO_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
if (f == NULL) {
if (snprintf(fullpath, sizeof(fullpath),
- TEST_POWER_SYSFILE_SCALING_FREQ, lcore_id) < 0) {
+ TEST_POWER_SYSFILE_SCALING_FREQ, cpu_id) < 0) {
return 0;
}
f = fopen(fullpath, "r");
@@ -497,6 +499,19 @@ test_power_cpufreq(void)
{
int ret = -1;
enum power_management_env env;
+ rte_cpuset_t lcore_cpus;
+
+ lcore_cpus = rte_lcore_cpuset(TEST_POWER_LCORE_ID);
+ if (CPU_COUNT(&lcore_cpus) != 1) {
+ printf("Power management doesn't support lcore %u mapping to %u CPUs\n",
+ TEST_POWER_LCORE_ID,
+ CPU_COUNT(&lcore_cpus));
+ return TEST_SKIPPED;
+ }
+ for (cpu_id = 0; cpu_id < CPU_SETSIZE; cpu_id++) {
+ if (CPU_ISSET(cpu_id, &lcore_cpus))
+ break;
+ }
/* Test initialisation of a valid lcore */
ret = rte_power_init(TEST_POWER_LCORE_ID);
diff --git a/lib/power/power_acpi_cpufreq.c b/lib/power/power_acpi_cpufreq.c
index 8b55f19247..d860a12a8c 100644
--- a/lib/power/power_acpi_cpufreq.c
+++ b/lib/power/power_acpi_cpufreq.c
@@ -258,7 +258,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_amd_pstate_cpufreq.c b/lib/power/power_amd_pstate_cpufreq.c
index dbd9d2b3ee..7b8e77003f 100644
--- a/lib/power/power_amd_pstate_cpufreq.c
+++ b/lib/power/power_amd_pstate_cpufreq.c
@@ -376,7 +376,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_common.c b/lib/power/power_common.c
index 1e09facb86..8ffb49ef8f 100644
--- a/lib/power/power_common.c
+++ b/lib/power/power_common.c
@@ -9,6 +9,7 @@
#include <rte_log.h>
#include <rte_string_fns.h>
+#include <rte_lcore.h>
#include "power_common.h"
@@ -202,3 +203,24 @@ out:
return ret;
}
+
+int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id)
+{
+ rte_cpuset_t lcore_cpus;
+ uint32_t cpu;
+
+ lcore_cpus = rte_lcore_cpuset(lcore_id);
+ if (CPU_COUNT(&lcore_cpus) != 1) {
+ RTE_LOG(ERR, POWER, "Power library does not support lcore %u mapping to %u CPUs", lcore_id,
+ CPU_COUNT(&lcore_cpus));
+ return -1;
+ }
+
+ for (cpu = 0; cpu < CPU_SETSIZE; cpu++) {
+ if (CPU_ISSET(cpu, &lcore_cpus))
+ break;
+ }
+ *cpu_id = cpu;
+
+ return 0;
+}
diff --git a/lib/power/power_common.h b/lib/power/power_common.h
index c1c7139276..b928df941f 100644
--- a/lib/power/power_common.h
+++ b/lib/power/power_common.h
@@ -27,5 +27,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
int read_core_sysfs_u32(FILE *f, uint32_t *val);
int read_core_sysfs_s(FILE *f, char *buf, unsigned int len);
int write_core_sysfs_s(FILE *f, const char *str);
+int power_get_lcore_mapped_cpu_id(uint32_t lcore_id, uint32_t *cpu_id);
#endif /* _POWER_COMMON_H_ */
diff --git a/lib/power/power_cppc_cpufreq.c b/lib/power/power_cppc_cpufreq.c
index f2ba684c83..add477c804 100644
--- a/lib/power/power_cppc_cpufreq.c
+++ b/lib/power/power_cppc_cpufreq.c
@@ -362,7 +362,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_userspace(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
diff --git a/lib/power/power_pstate_cpufreq.c b/lib/power/power_pstate_cpufreq.c
index 5ca5f60bcd..890875bd93 100644
--- a/lib/power/power_pstate_cpufreq.c
+++ b/lib/power/power_pstate_cpufreq.c
@@ -564,7 +564,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
return -1;
}
- pi->lcore_id = lcore_id;
+ if (power_get_lcore_mapped_cpu_id(lcore_id, &pi->lcore_id) < 0) {
+ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
+ return -1;
+ }
+
/* Check and set the governor */
if (power_set_governor_performance(pi) < 0) {
RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-11-11 14:23:10.615056131 +0800
+++ 0119-power-fix-mapped-lcore-ID.patch 2024-11-11 14:23:05.362192835 +0800
@@ -1 +1 @@
-From 5c9b07eeba55d527025f1f4945e2dbb366f21215 Mon Sep 17 00:00:00 2001
+From ff3018e2ffc9660ddd9770657dbf491442a34c7a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5c9b07eeba55d527025f1f4945e2dbb366f21215 ]
@@ -25 +27,0 @@
-Cc: stable@dpdk.org
@@ -34 +36 @@
- lib/power/power_common.c | 23 +++++++++++++++++++++++
+ lib/power/power_common.c | 22 ++++++++++++++++++++++
@@ -38 +40 @@
- 7 files changed, 62 insertions(+), 7 deletions(-)
+ 7 files changed, 61 insertions(+), 7 deletions(-)
@@ -101 +103 @@
-index abad53bef1..ae809fbb60 100644
+index 8b55f19247..d860a12a8c 100644
@@ -104 +106 @@
-@@ -264,7 +264,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
+@@ -258,7 +258,11 @@ power_acpi_cpufreq_init(unsigned int lcore_id)
@@ -110 +112 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
@@ -116 +118 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -118 +120 @@
-index 4809d45a22..2b728eca18 100644
+index dbd9d2b3ee..7b8e77003f 100644
@@ -121 +123 @@
-@@ -382,7 +382,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
+@@ -376,7 +376,11 @@ power_amd_pstate_cpufreq_init(unsigned int lcore_id)
@@ -127 +129 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
@@ -133 +135 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -135 +137 @@
-index 590986d5ef..b47c63a5f1 100644
+index 1e09facb86..8ffb49ef8f 100644
@@ -146 +148 @@
-@@ -204,3 +205,25 @@ out:
+@@ -202,3 +203,24 @@ out:
@@ -158,3 +160,2 @@
-+ POWER_LOG(ERR,
-+ "Power library does not support lcore %u mapping to %u CPUs",
-+ lcore_id, CPU_COUNT(&lcore_cpus));
++ RTE_LOG(ERR, POWER, "Power library does not support lcore %u mapping to %u CPUs", lcore_id,
++ CPU_COUNT(&lcore_cpus));
@@ -173 +174 @@
-index 83f742f42a..82fb94d0c0 100644
+index c1c7139276..b928df941f 100644
@@ -176 +177 @@
-@@ -31,5 +31,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
+@@ -27,5 +27,6 @@ int open_core_sysfs_file(FILE **f, const char *mode, const char *format, ...)
@@ -184 +185 @@
-index e73f4520d0..cc9305bdfe 100644
+index f2ba684c83..add477c804 100644
@@ -187 +188 @@
-@@ -368,7 +368,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
+@@ -362,7 +362,11 @@ power_cppc_cpufreq_init(unsigned int lcore_id)
@@ -193 +194 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u\n", lcore_id);
@@ -199 +200 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
@@ -201 +202 @@
-index 1c2a91a178..4755909466 100644
+index 5ca5f60bcd..890875bd93 100644
@@ -204 +205 @@
-@@ -570,7 +570,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
+@@ -564,7 +564,11 @@ power_pstate_cpufreq_init(unsigned int lcore_id)
@@ -210 +211 @@
-+ POWER_LOG(ERR, "Cannot get CPU ID mapped for lcore %u", lcore_id);
++ RTE_LOG(ERR, POWER, "Cannot get CPU ID mapped for lcore %u", lcore_id);
@@ -216 +217 @@
- POWER_LOG(ERR, "Cannot set governor of lcore %u to "
+ RTE_LOG(ERR, POWER, "Cannot set governor of lcore %u to "
* patch 'net/ionic: fix build with Fedora Rawhide' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (118 preceding siblings ...)
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li
120 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
To: Timothy Redaelli; +Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e837f2e7286f9cae9004abae1b6b6f1c99a1b876
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e837f2e7286f9cae9004abae1b6b6f1c99a1b876 Mon Sep 17 00:00:00 2001
From: Timothy Redaelli <tredaelli@redhat.com>
Date: Thu, 24 Oct 2024 11:30:06 +0200
Subject: [PATCH] net/ionic: fix build with Fedora Rawhide
Cc: Xueming Li <xuemingl@nvidia.com>
Currently, a number of integer types are typedef'd to their corresponding
userspace or RTE values. This can be problematic if these types are
already defined somewhere else, as it would cause type collisions.
This patch changes the typedefs to #define macros which are only defined
if the types are not defined already.
Fixes: 5ef518098ec6 ("net/ionic: register and initialize adapter")
Signed-off-by: Timothy Redaelli <tredaelli@redhat.com>
---
drivers/net/ionic/ionic_osdep.h | 30 ++++++++++++++++++++++--------
1 file changed, 22 insertions(+), 8 deletions(-)
diff --git a/drivers/net/ionic/ionic_osdep.h b/drivers/net/ionic/ionic_osdep.h
index 68f767b920..97188dfd59 100644
--- a/drivers/net/ionic/ionic_osdep.h
+++ b/drivers/net/ionic/ionic_osdep.h
@@ -30,14 +30,28 @@
#define __iomem
-typedef uint8_t u8;
-typedef uint16_t u16;
-typedef uint32_t u32;
-typedef uint64_t u64;
-
-typedef uint16_t __le16;
-typedef uint32_t __le32;
-typedef uint64_t __le64;
+#ifndef u8
+#define u8 uint8_t
+#endif
+#ifndef u16
+#define u16 uint16_t
+#endif
+#ifndef u32
+#define u32 uint32_t
+#endif
+#ifndef u64
+#define u64 uint64_t
+#endif
+
+#ifndef __le16
+#define __le16 rte_le16_t
+#endif
+#ifndef __le32
+#define __le32 rte_le32_t
+#endif
+#ifndef __le64
+#define __le64 rte_le64_t
+#endif
#define ioread8(reg) rte_read8(reg)
#define ioread32(reg) rte_read32(rte_le_to_cpu_32(reg))
--
2.34.1
* patch '' has been queued to stable release 23.11.3
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
` (119 preceding siblings ...)
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
@ 2024-11-11 6:28 ` Xueming Li
2024-12-07 7:59 ` patches " Xueming Li
120 siblings, 1 reply; 230+ messages in thread
From: Xueming Li @ 2024-11-11 6:28 UTC (permalink / raw)
Cc: xuemingl, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 11/30/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
@ 2024-11-12 9:02 ` David Marchand
2024-11-12 11:35 ` Xueming Li
0 siblings, 1 reply; 230+ messages in thread
From: David Marchand @ 2024-11-12 9:02 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Hello Xueming,
On Mon, Nov 11, 2024 at 7:30 AM Xueming Li <xuemingl@nvidia.com> wrote:
>
> Hi,
>
> FYI, your patch has been queued to stable release 23.11.3
>
> Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> It will be pushed if I get no objections before 11/30/24. So please
> shout if anyone has objections.
>
> Also note that after the patch there's a diff of the upstream commit vs the
> patch applied to the branch. This will indicate if there was any rebasing
> needed to apply to the stable branch. If there were code changes for rebasing
> (ie: not only metadata diffs), please double check that the rebase was
> correctly done.
>
> Queued patches are on a temporary branch at:
> https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
>
> This queued commit can be viewed at:
> https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
>
> Thanks.
>
> Xueming Li <xuemingl@nvidia.com>
>
> ---
> From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
> From: David Marchand <david.marchand@redhat.com>
> Date: Fri, 17 Nov 2023 14:18:23 +0100
> Subject: [PATCH] log: add a per line log helper
> Cc: Xueming Li <xuemingl@nvidia.com>
>
> [upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
>
> gcc builtin __builtin_strchr can be used as a static assertion to check
> whether passed format strings contain a \n.
> This can be useful to detect double \n in log messages.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
> Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Why do we want this change backported?
--
David Marchand
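For context on the helper being discussed, the compile-time newline check mentioned in the quoted commit message can be sketched roughly as follows. This relies on non-standard GCC behaviour (folding __builtin_strchr() on a string literal into a constant); the LOG_LINE name is made up for the example and is not the upstream RTE_LOG_LINE macro.

#include <stdio.h>

/* Reject, at compile time, format strings that embed a '\n'. */
#define LOG_LINE(fmt, ...) do { \
        _Static_assert(!__builtin_strchr(fmt, '\n'), \
                "log format string must not contain a newline"); \
        printf(fmt "\n", ##__VA_ARGS__); \
} while (0)

int main(void)
{
        LOG_LINE("queued %d patches for 23.11.3", 121); /* builds fine */
        /* LOG_LINE("double newline\n"); */ /* would fail to compile */
        return 0;
}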
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 9:02 ` David Marchand
@ 2024-11-12 11:35 ` Xueming Li
2024-11-12 12:47 ` David Marchand
0 siblings, 1 reply; 230+ messages in thread
From: Xueming Li @ 2024-11-12 11:35 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Hi David,
________________________________
> From: David Marchand <david.marchand@redhat.com>
> Sent: Tuesday, November 12, 2024 5:02 PM
> To: Xueming Li <xuemingl@nvidia.com>
> Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
> Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
>
> Hello Xueming,
>
> On Mon, Nov 11, 2024 at 7:30 AM Xueming Li <xuemingl@nvidia.com> wrote:
> >
> > Hi,
> >
> > FYI, your patch has been queued to stable release 23.11.3
> >
> > Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
> > It will be pushed if I get no objections before 11/30/24. So please
> > shout if anyone has objections.
> >
> > Also note that after the patch there's a diff of the upstream commit vs the
> > patch applied to the branch. This will indicate if there was any rebasing
> > needed to apply to the stable branch. If there were code changes for rebasing
> > (ie: not only metadata diffs), please double check that the rebase was
> > correctly done.
> >
> > Queued patches are on a temporary branch at:
> > https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
> >
> > This queued commit can be viewed at:
> > https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b6d04ef865b12f884aaf475adc454184cefae753
> >
> > Thanks.
> >
> > Xueming Li <xuemingl@nvidia.com>
> >
> > ---
> > From b6d04ef865b12f884aaf475adc454184cefae753 Mon Sep 17 00:00:00 2001
> > From: David Marchand <david.marchand@redhat.com>
> > Date: Fri, 17 Nov 2023 14:18:23 +0100
> > Subject: [PATCH] log: add a per line log helper
> > Cc: Xueming Li <xuemingl@nvidia.com>
> >
> > [upstream commit ab550c1d6a0893f00198017a3a0e7cd402a667fd]
> >
> > gcc builtin __builtin_strchr can be used as a static assertion to check
> > whether passed format strings contain a \n.
> > This can be useful to detect double \n in log messages.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Stephen Hemminger <stephen@networkplumber.org>
> > Acked-by: Chengwen Feng <fengchengwen@huawei.com>
>
> Why do we want this change backported?
It's a dependency of the patch below that we want to backport, any suggestions?
- f665790a5d drivers: remove redundant newline from logs
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 11:35 ` Xueming Li
@ 2024-11-12 12:47 ` David Marchand
2024-11-12 13:56 ` Xueming Li
0 siblings, 1 reply; 230+ messages in thread
From: David Marchand @ 2024-11-12 12:47 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
On Tue, Nov 12, 2024 at 12:35 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > Why do we want this change backported?
>
> It's a dependency of below patch to backport, any suggestion?
> - f665790a5d drivers: remove redundant newline from logs
In theory, there should be no such dependency.
This f665790a5d change is supposed to remove only extra \n and nothing more.
Could you give more detail?
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 12:47 ` David Marchand
@ 2024-11-12 13:56 ` Xueming Li
2024-11-12 14:09 ` David Marchand
0 siblings, 1 reply; 230+ messages in thread
From: Xueming Li @ 2024-11-12 13:56 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
RTE_LOG_DP_LINE is called.
________________________________
From: David Marchand <david.marchand@redhat.com>
Sent: Tuesday, November 12, 2024 8:47 PM
To: Xueming Li <xuemingl@nvidia.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
On Tue, Nov 12, 2024 at 12:35 PM Xueming Li <xuemingl@nvidia.com> wrote:
> > Why do we want this change backported?
>
> It's a dependency of below patch to backport, any suggestion?
> - f665790a5d drivers: remove redundant newline from logs
In theory, there should be no such dependency.
This f665790a5d change is supposed to remove only extra \n and nothing more.
Could you give more detail?
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 13:56 ` Xueming Li
@ 2024-11-12 14:09 ` David Marchand
2024-11-12 14:11 ` Xueming Li
0 siblings, 1 reply; 230+ messages in thread
From: David Marchand @ 2024-11-12 14:09 UTC (permalink / raw)
To: Xueming Li; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
On Tue, Nov 12, 2024 at 2:57 PM Xueming Li <xuemingl@nvidia.com> wrote:
>
> RTE_LOG_DP_LINE is called.
Oh indeed, that's an error in the commit f665790a5d.
The hunk on drivers/crypto/dpaa_sec/dpaa_sec_log.h should be dropped.
--
David Marchand
* Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
2024-11-12 14:09 ` David Marchand
@ 2024-11-12 14:11 ` Xueming Li
0 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-11-12 14:11 UTC (permalink / raw)
To: David Marchand; +Cc: Stephen Hemminger, Chengwen Feng, dpdk stable
Thanks for the update, I'll take it out, then remove this patch.
________________________________
From: David Marchand <david.marchand@redhat.com>
Sent: Tuesday, November 12, 2024 10:09 PM
To: Xueming Li <xuemingl@nvidia.com>
Cc: Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
Subject: Re: patch 'log: add a per line log helper' has been queued to stable release 23.11.3
On Tue, Nov 12, 2024 at 2:57 PM Xueming Li <xuemingl@nvidia.com> wrote:
>
> RTE_LOG_DP_LINE is called.
Oh indeed, that's an error in the commit f665790a5d.
The hunk on drivers/crypto/dpaa_sec/dpaa_sec_log.h should be dropped.
--
David Marchand
* patches has been queued to stable release 23.11.3
2024-11-11 6:28 ` patch '' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: fix using Tx queue higher than Rx queues' " Xueming Li
` (96 more replies)
0 siblings, 97 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Xueming Li; +Cc: dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=87f65fa28ad09345f55e58e35e2c856ff4fc8bc2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 87f65fa28ad09345f55e58e35e2c856ff4fc8bc2 Mon Sep 17 00:00:00 2001
From: Xueming Li <xuemingl@nvidia.com>
Date: Fri, 6 Dec 2024 23:26:43 +0800
Subject: [PATCH] *** SUBJECT HERE ***
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
Ajit Khaparde (1):
net/bnxt: fix TCP and UDP checksum flags
Alan Elder (1):
net/netvsc: fix using Tx queue higher than Rx queues
Alexander Kozyrev (3):
common/mlx5: fix error CQE handling for 128 bytes CQE
net/mlx5: fix shared queue port number in vector Rx
net/mlx5: fix miniCQEs number calculation
Arkadiusz Kusztal (2):
crypto/qat: fix modexp/inv length
crypto/qat: fix ECDSA session handling
Bing Zhao (4):
net/mlx5: fix Rx queue control management
net/mlx5: fix default RSS flows creation order
net/mlx5: fix Rx queue reference count in flushing flows
net/mlx5: fix shared Rx queue control release
Brian Dooley (1):
test/crypto: fix synchronous API calls
Bruce Richardson (4):
net/ice: detect stopping a flow director queue twice
app/dumpcap: remove unused struct array
eventdev: fix possible array underflow/overflow
net/iavf: add segment-length check to Tx prep
Chengwen Feng (3):
net/hns3: restrict tunnel flow rule to one header
net/hns3: register VLAN flow match mode parameter
net/mvneta: fix possible out-of-bounds write
Danylo Vodopianov (1):
app/testpmd: fix aged flow destroy
Dariusz Sosnowski (1):
net/mlx5: fix counter query loop getting stuck
David Marchand (2):
crypto/openssl: fix 3DES-CTR with big endian CPUs
eal/unix: optimize thread creation
Dengdui Huang (3):
net/hns3: remove ROH devices
net/hns3: fix error code for repeatedly create counter
net/hns3: fix fully use hardware flow director table
Erez Shitrit (1):
net/mlx5/hws: fix allocation of STCs
Farah Smith (1):
net/bnxt/tf_core: fix Thor TF EM key size check
Fidaullah Noonari (1):
app/procinfo: fix leak on exit
Gagandeep Singh (1):
net/dpaa2: fix memory corruption in TM
Gregory Etelson (6):
net/mlx5: fix GRE flow item translation for root table
net/mlx5/hws: fix range definer error recovery
net/mlx5: fix SQ flow item size
net/mlx5: fix non-template flow action validation
net/mlx5: fix SWS meter state initialization
net/mlx5: fix indirect list flow action callback invocation
Hanumanth Pothula (1):
event/octeontx: fix possible integer overflow
Harman Kalra (1):
common/cnxk: fix double free of flow aging resources
Hemant Agrawal (2):
examples/l2fwd-event: fix spinlock handling
bus/dpaa: fix lock condition during error handling
Huisong Li (1):
examples/l3fwd-power: fix options parsing overflow
Igor Gutorov (1):
net/mlx5: fix reported Rx/Tx descriptor limits
Jiawen Wu (9):
net/txgbe: fix SWFW mbox
net/txgbe: fix VF-PF mbox interrupt
net/txgbe: remove outer UDP checksum capability
net/txgbe: fix driver load bit to inform firmware
net/ngbe: fix driver load bit to inform firmware
net/ngbe: reconfigure more MAC Rx registers
net/ngbe: fix interrupt lost in legacy or MSI mode
net/ngbe: restrict configuration of VLAN strip offload
net/txgbe: fix a mass of interrupts
Konstantin Ananyev (1):
examples/l3fwd: fix read beyond boundaries
Lewis Donzis (1):
net/ixgbe: fix link status delay on FreeBSD
Long Li (1):
net/netvsc: force Tx VLAN offload on 801.2Q packet
Martin Weiser (1):
net/igc: fix Rx buffers when timestamping enabled
Morten Brørup (2):
net/vmxnet3: fix potential out of bounds stats access
net/vmxnet3: support larger MTU with version 6
Nicolas Chautru (1):
baseband/acc: fix ring memory allocation
Peter Morrow (1):
net/bnxt: fix reading SFF-8436 SFP EEPROMs
Peter Spreadborough (1):
net/bnxt: fix bad action offset in Tx BD
Praveen Shetty (1):
net/cpfl: fix forwarding to physical port
Roger Melton (1):
net/vmxnet3: fix crash after configuration failure
Rohit Raj (1):
bus/fslmc: fix Coverity warnings in QBMAN
Sangtani Parag Satishbhai (1):
net/bnxt/tf_core: fix slice count in case of HA entry move
Shahaji Bhosle (2):
net/bnxt/tf_core: fix WC TCAM multi-slice delete
net/bnxt/tf_core: fix TCAM manager data corruption
Shani Peretz (1):
common/mlx5: fix misalignment
Shun Hao (1):
net/mlx5: fix memory leak in metering
Stephen Hemminger (22):
test/bonding: remove redundant info query
examples/ntb: check info query return
crypto/openssl: fix potential string overflow
net/bnx2x: remove dead conditional
net/bnx2x: fix always true expression
net/bnx2x: fix possible infinite loop at startup
net/bnx2x: fix duplicate branch
net/dpaa2: remove unnecessary check for null before free
common/dpaax/caamflib: enable fallthrough warnings
net/e1000/base: fix fallthrough in switch
member: fix choice of bucket for displacement
vhost: fix deadlock in Rx async path
pcapng: avoid potential unaligned data
test/bonding: fix loop on members
test/bonding: fix MAC address comparison
test/security: fix IPv6 extension loop
test/event: avoid duplicate initialization
test/eal: fix loop coverage for alignment macros
test/eal: fix lcore check
app/testpmd: remove redundant policy action condition
app/testpmd: avoid potential outside of array reference
doc: correct definition of stats per queue feature
Sunil Kumar Kori (2):
common/cnxk: fix build on Ubuntu 24.04
net/cnxk: fix build on Ubuntu 24.04
Thomas Monjalon (1):
devtools: fix check of multiple commits fixed at once
Tim Martin (2):
net/mlx5: fix real time counter reading from PCI BAR
net/mlx5: fix Tx tracing to use single clock source
Viacheslav Ovsiienko (1):
net/mlx5: fix trace script for multiple burst completion
Vladimir Medvedkin (1):
net/i40e: check register read for outer VLAN
.mailmap | 5 +-
app/dumpcap/main.c | 1 -
app/proc-info/main.c | 5 +-
app/test-pmd/cmdline_flow.c | 2 +-
app/test-pmd/config.c | 9 +-
app/test/test_common.c | 31 ++++---
app/test/test_cryptodev.c | 8 ++
app/test/test_eal_flags.c | 4 +-
app/test/test_event_crypto_adapter.c | 24 +++--
app/test/test_link_bonding.c | 9 +-
app/test/test_link_bonding_rssconf.c | 1 -
app/test/test_security_inline_proto_vectors.h | 8 +-
devtools/git-log-fixes.sh | 2 +-
doc/guides/nics/features.rst | 34 ++++---
doc/guides/nics/mlx5.rst | 71 +++++++++++++++
drivers/baseband/acc/acc_common.h | 2 +-
drivers/bus/dpaa/base/qbman/qman.c | 8 +-
drivers/bus/fslmc/qbman/qbman_debug.c | 49 ++++++----
drivers/common/cnxk/roc_irq.c | 2 +-
drivers/common/cnxk/roc_npc.c | 3 +-
.../common/dpaax/caamflib/rta/operation_cmd.h | 4 -
drivers/common/mlx5/mlx5_common_utils.h | 2 +-
drivers/common/mlx5/mlx5_devx_cmds.c | 1 +
drivers/common/mlx5/mlx5_devx_cmds.h | 1 +
drivers/common/mlx5/mlx5_prm.h | 33 ++++++-
drivers/common/mlx5/windows/mlx5_win_defs.h | 12 ---
drivers/compress/mlx5/mlx5_compress.c | 4 +-
drivers/crypto/mlx5/mlx5_crypto_gcm.c | 2 +-
drivers/crypto/mlx5/mlx5_crypto_xts.c | 2 +-
drivers/crypto/openssl/rte_openssl_pmd.c | 30 +++----
drivers/crypto/qat/qat_asym.c | 43 ++++++++-
drivers/event/octeontx/ssovf_evdev.c | 16 +++-
drivers/net/bnx2x/bnx2x.c | 19 ++--
drivers/net/bnx2x/bnx2x_stats.c | 4 -
drivers/net/bnxt/bnxt_ethdev.c | 1 -
drivers/net/bnxt/bnxt_txr.c | 17 ++--
drivers/net/bnxt/tf_core/cfa_tcam_mgr.c | 11 ++-
drivers/net/bnxt/tf_core/tf_msg.c | 30 +++----
drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c | 16 +++-
drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 5 --
drivers/net/dpaa2/dpaa2_ethdev.c | 3 +-
drivers/net/dpaa2/dpaa2_tm.c | 29 +++---
drivers/net/e1000/base/e1000_82575.c | 1 +
drivers/net/e1000/base/e1000_api.c | 1 +
drivers/net/e1000/base/meson.build | 3 +-
drivers/net/hns3/hns3_cmd.c | 4 +-
drivers/net/hns3/hns3_common.c | 2 +-
drivers/net/hns3/hns3_common.h | 2 +-
drivers/net/hns3/hns3_ethdev.c | 5 +-
drivers/net/hns3/hns3_ethdev.h | 2 -
drivers/net/hns3/hns3_fdir.c | 1 +
drivers/net/hns3/hns3_flow.c | 7 +-
drivers/net/i40e/i40e_flow.c | 77 +++++++++++++---
drivers/net/iavf/iavf_rxtx.c | 6 +-
drivers/net/ice/ice_rxtx.c | 5 ++
drivers/net/igc/igc_txrx.c | 26 ++++++
drivers/net/ixgbe/ixgbe_ethdev.c | 5 --
drivers/net/mlx5/hws/mlx5dr.h | 4 +-
drivers/net/mlx5/hws/mlx5dr_context.c | 9 +-
drivers/net/mlx5/hws/mlx5dr_definer.c | 1 +
drivers/net/mlx5/linux/mlx5_os.c | 8 +-
drivers/net/mlx5/mlx5.c | 6 +-
drivers/net/mlx5/mlx5.h | 2 +-
drivers/net/mlx5/mlx5_ethdev.c | 4 +
drivers/net/mlx5/mlx5_flow.c | 6 +-
drivers/net/mlx5/mlx5_flow.h | 14 ++-
drivers/net/mlx5/mlx5_flow_aso.c | 6 +-
drivers/net/mlx5/mlx5_flow_dv.c | 23 ++---
drivers/net/mlx5/mlx5_flow_hw.c | 27 ++++--
drivers/net/mlx5/mlx5_flow_meter.c | 6 +-
drivers/net/mlx5/mlx5_hws_cnt.c | 46 ++++++----
drivers/net/mlx5/mlx5_rx.c | 2 +-
drivers/net/mlx5/mlx5_rx.h | 3 +-
drivers/net/mlx5/mlx5_rxq.c | 38 +++++---
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 15 ++--
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 27 +++---
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 9 +-
drivers/net/mlx5/mlx5_trace.h | 9 +-
drivers/net/mlx5/mlx5_tx.c | 8 +-
drivers/net/mlx5/mlx5_tx.h | 53 ++++++++++-
drivers/net/mlx5/mlx5_txpp.c | 11 +--
drivers/net/mlx5/mlx5_txq.c | 8 ++
drivers/net/mlx5/tools/mlx5_trace.py | 17 ++--
drivers/net/mlx5/windows/mlx5_os.c | 8 +-
drivers/net/mvneta/mvneta_ethdev.c | 6 ++
drivers/net/netvsc/hn_ethdev.c | 9 ++
drivers/net/netvsc/hn_rxtx.c | 89 ++++++++++++++++---
drivers/net/ngbe/base/ngbe_regs.h | 2 +
drivers/net/ngbe/ngbe_ethdev.c | 73 +++++++++------
drivers/net/txgbe/base/txgbe_mng.c | 1 +
drivers/net/txgbe/base/txgbe_regs.h | 2 +-
drivers/net/txgbe/txgbe_ethdev.c | 10 +++
drivers/net/txgbe/txgbe_rxtx.c | 3 +-
drivers/net/vmxnet3/vmxnet3_ethdev.c | 35 +++++---
drivers/net/vmxnet3/vmxnet3_ethdev.h | 4 +-
examples/l2fwd-event/l2fwd_event.c | 1 +
examples/l3fwd-power/main.c | 41 ++++-----
examples/l3fwd/l3fwd_altivec.h | 6 +-
examples/l3fwd/l3fwd_common.h | 7 ++
examples/l3fwd/l3fwd_em_hlm.h | 2 +-
examples/l3fwd/l3fwd_em_sequential.h | 2 +-
examples/l3fwd/l3fwd_fib.c | 2 +-
examples/l3fwd/l3fwd_lpm_altivec.h | 2 +-
examples/l3fwd/l3fwd_lpm_neon.h | 2 +-
examples/l3fwd/l3fwd_lpm_sse.h | 2 +-
examples/l3fwd/l3fwd_neon.h | 6 +-
examples/l3fwd/l3fwd_sse.h | 6 +-
examples/ntb/ntb_fwd.c | 5 +-
lib/eal/unix/meson.build | 5 ++
lib/eal/unix/rte_thread.c | 25 ++++++
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
lib/member/rte_member_ht.c | 2 +-
lib/pcapng/rte_pcapng.c | 10 +--
lib/vhost/virtio_net.c | 2 +-
115 files changed, 993 insertions(+), 453 deletions(-)
--
2.34.1
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/netvsc: fix using Tx queue higher than Rx queues' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: restrict tunnel flow rule to one header' " Xueming Li
` (95 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Alan Elder; +Cc: Xueming Li, Long Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=379d49883364269eceafc38ffcc6f30c46655fbd
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 379d49883364269eceafc38ffcc6f30c46655fbd Mon Sep 17 00:00:00 2001
From: Alan Elder <alan.elder@microsoft.com>
Date: Thu, 17 Oct 2024 12:20:29 -0700
Subject: [PATCH] net/netvsc: fix using Tx queue higher than Rx queues
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e90020535c03cf9e60448ba623cac3301f111dae ]
The previous code allowed the number of Tx queues to be set higher than
the number of Rx queues. If a packet was sent on a Tx queue with index
>= the number of Rx queues, there was a segfault due to accessing beyond
the end of the dev->data->rx_queues[] array.
This commit fixes the issue by creating an Rx queue for every Tx queue,
meaning that an event buffer is allocated to handle receiving Tx
completion messages.
The mbuf pool and Rx ring are not allocated for these additional Rx
queues, and the RSS configuration ensures that no packets are received
on them.
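For illustration only, a minimal sketch (not part of the queued patch) of
the configuration that used to trigger the crash; the port id and queue
counts below are hypothetical:
#include <rte_ethdev.h>
/* Sketch: a netvsc port configured with more Tx than Rx queues. Before
 * this fix, transmitting on Tx queue 2 or 3 accessed beyond the end of
 * dev->data->rx_queues[]; with the fix, event-only Rx queues (no mbuf
 * pool, no Rx ring) back those extra Tx queues.
 */
static int
configure_asymmetric_queues(uint16_t port_id)
{
        struct rte_eth_conf conf = { 0 };
        /* 2 Rx queues, 4 Tx queues */
        return rte_eth_dev_configure(port_id, 2, 4, &conf);
}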
Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Signed-off-by: Alan Elder <alan.elder@microsoft.com>
Signed-off-by: Long Li <longli@microsoft.com>
---
drivers/net/netvsc/hn_ethdev.c | 9 +++++
drivers/net/netvsc/hn_rxtx.c | 69 +++++++++++++++++++++++++++++-----
2 files changed, 69 insertions(+), 9 deletions(-)
diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index f8cb05a118..1736cb5d07 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -313,6 +313,15 @@ static int hn_rss_reta_update(struct rte_eth_dev *dev,
if (reta_conf[idx].mask & mask)
hv->rss_ind[i] = reta_conf[idx].reta[shift];
+
+ /*
+ * Ensure we don't allow config that directs traffic to an Rx
+ * queue that we aren't going to poll
+ */
+ if (hv->rss_ind[i] >= dev->data->nb_rx_queues) {
+ PMD_DRV_LOG(ERR, "RSS distributing traffic to invalid Rx queue");
+ return -EINVAL;
+ }
}
err = hn_rndis_conf_rss(hv, NDIS_RSS_FLAG_DISABLE);
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 297ff3fb31..5777a14d70 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -234,6 +234,17 @@ static void hn_reset_txagg(struct hn_tx_queue *txq)
txq->agg_prevpkt = NULL;
}
+static void
+hn_rx_queue_free_common(struct hn_rx_queue *rxq)
+{
+ if (!rxq)
+ return;
+
+ rte_free(rxq->rxbuf_info);
+ rte_free(rxq->event_buf);
+ rte_free(rxq);
+}
+
int
hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
uint16_t queue_idx, uint16_t nb_desc,
@@ -243,6 +254,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
{
struct hn_data *hv = dev->data->dev_private;
struct hn_tx_queue *txq;
+ struct hn_rx_queue *rxq = NULL;
char name[RTE_MEMPOOL_NAMESIZE];
uint32_t tx_free_thresh;
int err = -ENOMEM;
@@ -301,6 +313,27 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
goto error;
}
+ /*
+ * If there are more Tx queues than Rx queues, allocate rx_queues
+ * with event buffer so that Tx completion messages can still be
+ * received
+ */
+ if (queue_idx >= dev->data->nb_rx_queues) {
+ rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
+
+ if (!rxq) {
+ err = -ENOMEM;
+ goto error;
+ }
+
+ /*
+ * Don't allocate mbuf pool or rx ring. RSS is always configured
+ * to ensure packets aren't received by this Rx queue.
+ */
+ rxq->mb_pool = NULL;
+ rxq->rx_ring = NULL;
+ }
+
txq->agg_szmax = RTE_MIN(hv->chim_szmax, hv->rndis_agg_size);
txq->agg_pktmax = hv->rndis_agg_pkts;
txq->agg_align = hv->rndis_agg_align;
@@ -311,12 +344,15 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
socket_id, tx_conf);
if (err == 0) {
dev->data->tx_queues[queue_idx] = txq;
+ if (rxq != NULL)
+ dev->data->rx_queues[queue_idx] = rxq;
return 0;
}
error:
rte_mempool_free(txq->txdesc_pool);
rte_memzone_free(txq->tx_rndis_mz);
+ hn_rx_queue_free_common(rxq);
rte_free(txq);
return err;
}
@@ -363,6 +399,12 @@ hn_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
if (!txq)
return;
+ /*
+ * Free any Rx queues allocated for a Tx queue without a corresponding
+ * Rx queue
+ */
+ if (qid >= dev->data->nb_rx_queues)
+ hn_rx_queue_free_common(dev->data->rx_queues[qid]);
rte_mempool_free(txq->txdesc_pool);
@@ -552,10 +594,12 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
const struct hn_rxinfo *info)
{
struct hn_data *hv = rxq->hv;
- struct rte_mbuf *m;
+ struct rte_mbuf *m = NULL;
bool use_extbuf = false;
- m = rte_pktmbuf_alloc(rxq->mb_pool);
+ if (likely(rxq->mb_pool != NULL))
+ m = rte_pktmbuf_alloc(rxq->mb_pool);
+
if (unlikely(!m)) {
struct rte_eth_dev *dev =
&rte_eth_devices[rxq->port_id];
@@ -942,7 +986,15 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
if (queue_idx == 0) {
rxq = hv->primary;
} else {
- rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
+ /*
+ * If the number of Tx queues was previously greater than the
+ * number of Rx queues, we may already have allocated an rxq.
+ */
+ if (!dev->data->rx_queues[queue_idx])
+ rxq = hn_rx_queue_alloc(hv, queue_idx, socket_id);
+ else
+ rxq = dev->data->rx_queues[queue_idx];
+
if (!rxq)
return -ENOMEM;
}
@@ -975,9 +1027,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
fail:
rte_ring_free(rxq->rx_ring);
- rte_free(rxq->rxbuf_info);
- rte_free(rxq->event_buf);
- rte_free(rxq);
+ /* Only free rxq if it was created in this function. */
+ if (!dev->data->rx_queues[queue_idx])
+ hn_rx_queue_free_common(rxq);
+
return error;
}
@@ -998,9 +1051,7 @@ hn_rx_queue_free(struct hn_rx_queue *rxq, bool keep_primary)
if (keep_primary && rxq == rxq->hv->primary)
return;
- rte_free(rxq->rxbuf_info);
- rte_free(rxq->event_buf);
- rte_free(rxq);
+ hn_rx_queue_free_common(rxq);
}
void
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.183098225 +0800
+++ 0001-net-netvsc-fix-using-Tx-queue-higher-than-Rx-queues.patch 2024-12-06 23:26:43.833044829 +0800
@@ -1 +1 @@
-From e90020535c03cf9e60448ba623cac3301f111dae Mon Sep 17 00:00:00 2001
+From 379d49883364269eceafc38ffcc6f30c46655fbd Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e90020535c03cf9e60448ba623cac3301f111dae ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -49 +51 @@
-index 870f62e5fa..52aedb001f 100644
+index 297ff3fb31..5777a14d70 100644
@@ -52 +54 @@
-@@ -222,6 +222,17 @@ static void hn_reset_txagg(struct hn_tx_queue *txq)
+@@ -234,6 +234,17 @@ static void hn_reset_txagg(struct hn_tx_queue *txq)
@@ -70 +72 @@
-@@ -231,6 +242,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
+@@ -243,6 +254,7 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
@@ -78 +80 @@
-@@ -289,6 +301,27 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
+@@ -301,6 +313,27 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
@@ -106 +108 @@
-@@ -299,12 +332,15 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
+@@ -311,12 +344,15 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
@@ -122 +124 @@
-@@ -351,6 +387,12 @@ hn_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
+@@ -363,6 +399,12 @@ hn_dev_tx_queue_release(struct rte_eth_dev *dev, uint16_t qid)
@@ -135 +137 @@
-@@ -540,10 +582,12 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
+@@ -552,10 +594,12 @@ static void hn_rxpkt(struct hn_rx_queue *rxq, struct hn_rx_bufinfo *rxb,
@@ -150 +152 @@
-@@ -930,7 +974,15 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
+@@ -942,7 +986,15 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
@@ -167 +169 @@
-@@ -963,9 +1015,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
+@@ -975,9 +1027,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
@@ -181 +183 @@
-@@ -986,9 +1039,7 @@ hn_rx_queue_free(struct hn_rx_queue *rxq, bool keep_primary)
+@@ -998,9 +1051,7 @@ hn_rx_queue_free(struct hn_rx_queue *rxq, bool keep_primary)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: restrict tunnel flow rule to one header' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: fix using Tx queue higher than Rx queues' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: register VLAN flow match mode parameter' " Xueming Li
` (94 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Chengwen Feng; +Cc: Xueming Li, Jie Hai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8d0ca45d12f9b719c3c3e714024b24b11546acfe
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8d0ca45d12f9b719c3c3e714024b24b11546acfe Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 18 Oct 2024 14:19:38 +0800
Subject: [PATCH] net/hns3: restrict tunnel flow rule to one header
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8887c207b9373a1875031c5346706f698322d66d ]
The device's flow director supports at most one tunnel header; if more
than one tunnel header is passed via the rte_flow API, the driver should
return an error.
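For illustration only, a hypothetical pattern (not part of the queued
patch) carrying two tunnel headers, which the driver now rejects; specs
and masks are omitted for brevity:
#include <rte_flow.h>
/* Sketch: VXLAN followed by GRE gives two tunnel headers in one rule;
 * hns3 now fails such a rule with RTE_FLOW_ERROR_TYPE_ITEM instead of
 * accepting more tunnel headers than the flow director can match.
 */
static const struct rte_flow_item two_tunnel_pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP },
        { .type = RTE_FLOW_ITEM_TYPE_VXLAN }, /* first tunnel header */
        { .type = RTE_FLOW_ITEM_TYPE_GRE },   /* second tunnel header */
        { .type = RTE_FLOW_ITEM_TYPE_END },
};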
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_flow.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index 7fbe65313c..a32fb10ddf 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -1182,6 +1182,11 @@ hns3_parse_tunnel(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
"Tunnel packets must configure "
"with mask");
+ if (rule->key_conf.spec.tunnel_type != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item, "Too many tunnel headers!");
+
switch (item->type) {
case RTE_FLOW_ITEM_TYPE_VXLAN:
case RTE_FLOW_ITEM_TYPE_VXLAN_GPE:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.217150325 +0800
+++ 0002-net-hns3-restrict-tunnel-flow-rule-to-one-header.patch 2024-12-06 23:26:43.833044829 +0800
@@ -1 +1 @@
-From 8887c207b9373a1875031c5346706f698322d66d Mon Sep 17 00:00:00 2001
+From 8d0ca45d12f9b719c3c3e714024b24b11546acfe Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8887c207b9373a1875031c5346706f698322d66d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index bf1eee506d..5586708a5d 100644
+index 7fbe65313c..a32fb10ddf 100644
@@ -23 +25 @@
-@@ -1221,6 +1221,11 @@ hns3_parse_tunnel(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
+@@ -1182,6 +1182,11 @@ hns3_parse_tunnel(const struct rte_flow_item *item, struct hns3_fdir_rule *rule,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: register VLAN flow match mode parameter' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: fix using Tx queue higher than Rx queues' " Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: restrict tunnel flow rule to one header' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ice: detect stopping a flow director queue twice' " Xueming Li
` (93 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Chengwen Feng; +Cc: Xueming Li, Jie Hai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=78b0870687c5a1f5991b3d140339ba02a2ca14e8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 78b0870687c5a1f5991b3d140339ba02a2ca14e8 Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Fri, 18 Oct 2024 14:19:40 +0800
Subject: [PATCH] net/hns3: register VLAN flow match mode parameter
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit bf16032eb1e62338e02b1278e10033366448c5bc ]
This commit adds fdir_vlan_match_mode in RTE_PMD_REGISTER_PARAM_STRING.
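For illustration only, a hypothetical usage sketch (not part of the
queued patch); the PCI address is made up, while the parameter name and
values come from the string registered here:
#include <rte_common.h>
#include <rte_eal.h>
/* Sketch: select the strict VLAN match mode through devargs. */
static int
init_with_strict_vlan_match(void)
{
        char arg0[] = "app";
        char arg1[] = "-a";
        char arg2[] = "0000:7d:00.2,fdir_vlan_match_mode=strict";
        char *argv[] = { arg0, arg1, arg2 };
        return rte_eal_init((int)RTE_DIM(argv), argv);
}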
Fixes: 06b9ee343940 ("net/hns3: add VLAN match mode runtime config")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_common.c | 2 +-
drivers/net/hns3/hns3_common.h | 2 +-
drivers/net/hns3/hns3_ethdev.c | 3 ++-
3 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/drivers/net/hns3/hns3_common.c b/drivers/net/hns3/hns3_common.c
index 5e6cdfdaa0..7a36673c95 100644
--- a/drivers/net/hns3/hns3_common.c
+++ b/drivers/net/hns3/hns3_common.c
@@ -308,7 +308,7 @@ hns3_parse_devargs(struct rte_eth_dev *dev)
&hns3_parse_mbx_time_limit, &mbx_time_limit_ms);
if (!hns->is_vf)
(void)rte_kvargs_process(kvlist,
- HNS3_DEVARG_FDIR_VALN_MATCH_MODE,
+ HNS3_DEVARG_FDIR_VLAN_MATCH_MODE,
&hns3_parse_vlan_match_mode,
&hns->pf.fdir.vlan_match_mode);
diff --git a/drivers/net/hns3/hns3_common.h b/drivers/net/hns3/hns3_common.h
index cf9593bd0c..166852026f 100644
--- a/drivers/net/hns3/hns3_common.h
+++ b/drivers/net/hns3/hns3_common.h
@@ -27,7 +27,7 @@ enum {
#define HNS3_DEVARG_MBX_TIME_LIMIT_MS "mbx_time_limit_ms"
-#define HNS3_DEVARG_FDIR_VALN_MATCH_MODE "fdir_vlan_match_mode"
+#define HNS3_DEVARG_FDIR_VLAN_MATCH_MODE "fdir_vlan_match_mode"
#define MSEC_PER_SEC 1000L
#define USEC_PER_MSEC 1000L
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 6e72730d75..56adde3c66 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -6669,7 +6669,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_hns3,
HNS3_DEVARG_RX_FUNC_HINT "=vec|sve|simple|common "
HNS3_DEVARG_TX_FUNC_HINT "=vec|sve|simple|common "
HNS3_DEVARG_DEV_CAPS_MASK "=<1-65535> "
- HNS3_DEVARG_MBX_TIME_LIMIT_MS "=<uint16> ");
+ HNS3_DEVARG_MBX_TIME_LIMIT_MS "=<uint16> "
+ HNS3_DEVARG_FDIR_VLAN_MATCH_MODE "=strict|nostrict ");
RTE_LOG_REGISTER_SUFFIX(hns3_logtype_init, init, NOTICE);
RTE_LOG_REGISTER_SUFFIX(hns3_logtype_driver, driver, NOTICE);
#ifdef RTE_ETHDEV_DEBUG_RX
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.247427224 +0800
+++ 0003-net-hns3-register-VLAN-flow-match-mode-parameter.patch 2024-12-06 23:26:43.843044829 +0800
@@ -1 +1 @@
-From bf16032eb1e62338e02b1278e10033366448c5bc Mon Sep 17 00:00:00 2001
+From 78b0870687c5a1f5991b3d140339ba02a2ca14e8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit bf16032eb1e62338e02b1278e10033366448c5bc ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
@@ -46 +48 @@
-index 8b43d731ac..5c7d893736 100644
+index 6e72730d75..56adde3c66 100644
@@ -49 +51 @@
-@@ -6670,7 +6670,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_hns3,
+@@ -6669,7 +6669,8 @@ RTE_PMD_REGISTER_PARAM_STRING(net_hns3,
@@ -55 +57 @@
-+ HNS3_DEVARG_FDIR_VLAN_MATCH_MODE "=strict|nostrict "
++ HNS3_DEVARG_FDIR_VLAN_MATCH_MODE "=strict|nostrict ");
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ice: detect stopping a flow director queue twice' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (2 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/hns3: register VLAN flow match mode parameter' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ixgbe: fix link status delay on FreeBSD' " Xueming Li
` (92 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Xueming Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d366bfb9c814b20c4e7ae7e454117e820be7970d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d366bfb9c814b20c4e7ae7e454117e820be7970d Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 22 Oct 2024 17:39:41 +0100
Subject: [PATCH] net/ice: detect stopping a flow director queue twice
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7b230d43e8061bdaba02a41f601bb8e0b5dbff03 ]
If the flow-director queue is stopped at some point during the running
of an application, the shutdown procedure for the port issues an error
as it tries to stop the queue a second time, and fails to do so. We can
eliminate this error by setting the tail-register pointer to NULL on
stop, and checking for that condition in subsequent stop calls. Since
the register pointer is set on start, any restarting of the queue will
allow a stop call to progress as normal.
Fixes: 84dc7a95a2d3 ("net/ice: enable flow director engine")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/ice/ice_rxtx.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 7da314217a..644d106814 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -1123,6 +1123,10 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
tx_queue_id);
return -EINVAL;
}
+ if (txq->qtx_tail == NULL) {
+ PMD_DRV_LOG(INFO, "TX queue %u not started", tx_queue_id);
+ return 0;
+ }
vsi = txq->vsi;
q_ids[0] = txq->reg_idx;
@@ -1137,6 +1141,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
}
txq->tx_rel_mbufs(txq);
+ txq->qtx_tail = NULL;
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.278953624 +0800
+++ 0004-net-ice-detect-stopping-a-flow-director-queue-twice.patch 2024-12-06 23:26:43.843044829 +0800
@@ -1 +1 @@
-From 7b230d43e8061bdaba02a41f601bb8e0b5dbff03 Mon Sep 17 00:00:00 2001
+From d366bfb9c814b20c4e7ae7e454117e820be7970d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7b230d43e8061bdaba02a41f601bb8e0b5dbff03 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index d2f9edc221..024d97cb46 100644
+index 7da314217a..644d106814 100644
@@ -27 +29 @@
-@@ -1139,6 +1139,10 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+@@ -1123,6 +1123,10 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
@@ -38 +40 @@
-@@ -1153,6 +1157,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+@@ -1137,6 +1141,7 @@ ice_fdir_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ixgbe: fix link status delay on FreeBSD' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (3 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ice: detect stopping a flow director queue twice' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mvneta: fix possible out-of-bounds write' " Xueming Li
` (91 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Lewis Donzis; +Cc: Xueming Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d30bef9034c7ebb54096d40326bcab0ed78e92d3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d30bef9034c7ebb54096d40326bcab0ed78e92d3 Mon Sep 17 00:00:00 2001
From: Lewis Donzis <lew@perftech.com>
Date: Tue, 22 Oct 2024 09:42:05 -0500
Subject: [PATCH] net/ixgbe: fix link status delay on FreeBSD
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f775386d92d68e534600fcff3fc4bcaa30d3e68c ]
Forcing wait to true prevents checking the link status without delay:
the function will wait more than 10 seconds for the link status to come
up, even if the API caller requested no waiting.
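For illustration only, a minimal sketch (not part of the queued patch) of
the affected call; the port id handling is hypothetical:
#include <stdio.h>
#include <rte_ethdev.h>
/* Sketch: a non-blocking link query. On FreeBSD, before this fix, the
 * forced wait could block this call for more than 10 seconds while the
 * link was down, despite the "nowait" contract.
 */
static int
query_link_nowait(uint16_t port_id)
{
        struct rte_eth_link link;
        int ret = rte_eth_link_get_nowait(port_id, &link);
        if (ret == 0)
                printf("port %u link %s\n", (unsigned int)port_id,
                       link.link_status ? "up" : "down");
        return ret;
}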
Fixes: 0012111a3d87 ("net/ixgbe: fix link status synchronization on BSD")
Signed-off-by: Lewis Donzis <lew@perftech.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/ixgbe/ixgbe_ethdev.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 3ac65ca3b3..f4ec485d69 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -4305,11 +4305,6 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
if (wait_to_complete == 0 || dev->data->dev_conf.intr_conf.lsc != 0)
wait = 0;
-/* BSD has no interrupt mechanism, so force NIC status synchronization. */
-#ifdef RTE_EXEC_ENV_FREEBSD
- wait = 1;
-#endif
-
if (vf)
diag = ixgbevf_check_link(hw, &link_speed, &link_up, wait);
else
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.310023424 +0800
+++ 0005-net-ixgbe-fix-link-status-delay-on-FreeBSD.patch 2024-12-06 23:26:43.843044829 +0800
@@ -1 +1 @@
-From f775386d92d68e534600fcff3fc4bcaa30d3e68c Mon Sep 17 00:00:00 2001
+From d30bef9034c7ebb54096d40326bcab0ed78e92d3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f775386d92d68e534600fcff3fc4bcaa30d3e68c ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index ab37c37469..008760e315 100644
+index 3ac65ca3b3..f4ec485d69 100644
@@ -23 +25 @@
-@@ -4314,11 +4314,6 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
+@@ -4305,11 +4305,6 @@ ixgbe_dev_link_update_share(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mvneta: fix possible out-of-bounds write' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (4 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ixgbe: fix link status delay on FreeBSD' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'common/cnxk: fix double free of flow aging resources' " Xueming Li
` (90 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Chengwen Feng; +Cc: Xueming Li, Ferruh Yigit, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e26531c22509e629f3287cb7293e5c00d606d9fb
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e26531c22509e629f3287cb7293e5c00d606d9fb Mon Sep 17 00:00:00 2001
From: Chengwen Feng <fengchengwen@huawei.com>
Date: Thu, 10 Oct 2024 00:53:26 +0000
Subject: [PATCH] net/mvneta: fix possible out-of-bounds write
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c705c67d304b9450824a169b652520c2358c6aee ]
The mvneta_ifnames_get() function saves the 'iface' value to ifnames;
it performs an out-of-bounds write if too many iface pairs are passed
(e.g. 'iface=xxx,iface=xxx,...').
Fixes: 4ccc8d770d3b ("net/mvneta: add PMD skeleton")
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/mvneta/mvneta_ethdev.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/drivers/net/mvneta/mvneta_ethdev.c b/drivers/net/mvneta/mvneta_ethdev.c
index 212c300c14..7700a63071 100644
--- a/drivers/net/mvneta/mvneta_ethdev.c
+++ b/drivers/net/mvneta/mvneta_ethdev.c
@@ -91,6 +91,12 @@ mvneta_ifnames_get(const char *key __rte_unused, const char *value,
{
struct mvneta_ifnames *ifnames = extra_args;
+ if (ifnames->idx >= NETA_NUM_ETH_PPIO) {
+ MVNETA_LOG(ERR, "Too many ifnames specified (max %u)",
+ NETA_NUM_ETH_PPIO);
+ return -EINVAL;
+ }
+
ifnames->names[ifnames->idx++] = value;
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.346855123 +0800
+++ 0006-net-mvneta-fix-possible-out-of-bounds-write.patch 2024-12-06 23:26:43.843044829 +0800
@@ -1 +1 @@
-From c705c67d304b9450824a169b652520c2358c6aee Mon Sep 17 00:00:00 2001
+From e26531c22509e629f3287cb7293e5c00d606d9fb Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c705c67d304b9450824a169b652520c2358c6aee ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 3841c1ebe9..f99f9e6289 100644
+index 212c300c14..7700a63071 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/cnxk: fix double free of flow aging resources' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (5 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mvneta: fix possible out-of-bounds write' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'crypto/openssl: fix 3DES-CTR with big endian CPUs' " Xueming Li
` (89 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Harman Kalra; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=53018798593ac88d8b364b32f5b85ad3ce42f8a8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 53018798593ac88d8b364b32f5b85ad3ce42f8a8 Mon Sep 17 00:00:00 2001
From: Harman Kalra <hkalra@marvell.com>
Date: Wed, 23 Oct 2024 20:31:37 +0530
Subject: [PATCH] common/cnxk: fix double free of flow aging resources
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d1066ea60bcb5cbd3cdcc06d21afc232d8c08407 ]
As part of the NPC teardown sequence, flow aging resources are
cleaned up even though they were already cleaned up when the last flow
with an aging action was removed. This leads to a double free of
some resources.
Fixes: 85e9542d4700 ("common/cnxk: fix flow aging cleanup")
Signed-off-by: Harman Kalra <hkalra@marvell.com>
---
drivers/common/cnxk/roc_npc.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_npc.c b/drivers/common/cnxk/roc_npc.c
index fcede1d0b7..9ea96a524c 100644
--- a/drivers/common/cnxk/roc_npc.c
+++ b/drivers/common/cnxk/roc_npc.c
@@ -351,7 +351,8 @@ roc_npc_fini(struct roc_npc *roc_npc)
struct npc *npc = roc_npc_to_npc_priv(roc_npc);
int rc;
- npc_aging_ctrl_thread_destroy(roc_npc);
+ if (!roc_npc->flow_age.aged_flows_get_thread_exit)
+ npc_aging_ctrl_thread_destroy(roc_npc);
rc = npc_flow_free_all_resources(npc);
if (rc) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.374040223 +0800
+++ 0007-common-cnxk-fix-double-free-of-flow-aging-resources.patch 2024-12-06 23:26:43.853044829 +0800
@@ -1 +1 @@
-From d1066ea60bcb5cbd3cdcc06d21afc232d8c08407 Mon Sep 17 00:00:00 2001
+From 53018798593ac88d8b364b32f5b85ad3ce42f8a8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d1066ea60bcb5cbd3cdcc06d21afc232d8c08407 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 8a951b6360..2b3c90683c 100644
+index fcede1d0b7..9ea96a524c 100644
@@ -23 +25 @@
-@@ -389,7 +389,8 @@ roc_npc_fini(struct roc_npc *roc_npc)
+@@ -351,7 +351,8 @@ roc_npc_fini(struct roc_npc *roc_npc)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'crypto/openssl: fix 3DES-CTR with big endian CPUs' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (6 preceding siblings ...)
2024-12-07 7:59 ` patch 'common/cnxk: fix double free of flow aging resources' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix trace script for multiple burst completion' " Xueming Li
` (88 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: David Marchand
Cc: Xueming Li, Morten Brørup, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e594df339a8fd538e82b506a6a2c36dd71228ad0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e594df339a8fd538e82b506a6a2c36dd71228ad0 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Fri, 25 Oct 2024 09:04:21 +0200
Subject: [PATCH] crypto/openssl: fix 3DES-CTR with big endian CPUs
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 97afd07ca79c7270480a65febd7f616a4c0b07ca ]
Caught by code review.
Don't byte swap unconditionally (assuming that the CPU is little endian
is wrong). Instead, convert from big endian to CPU byte order and vice
versa.
Besides, avoid unaligned accesses and remove the ctr_inc helper, which
is not used anywhere else.
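For illustration only, a minimal sketch (not part of the queued patch) of
the portable counter increment this change relies on:
#include <stdint.h>
#include <string.h>
#include <rte_byteorder.h>
/* Sketch: increment a big-endian 64-bit counter held in a byte array.
 * rte_be_to_cpu_64() is a no-op on big-endian CPUs and a byte swap on
 * little-endian ones, unlike an unconditional __builtin_bswap64();
 * memcpy() avoids unaligned 64-bit accesses.
 */
static void
ctr_be64_inc(uint8_t iv[8])
{
        uint64_t ctr;
        memcpy(&ctr, iv, 8);
        ctr = rte_cpu_to_be_64(rte_be_to_cpu_64(ctr) + 1);
        memcpy(iv, &ctr, 8);
}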
Fixes: d61f70b4c918 ("crypto/libcrypto: add driver for OpenSSL library")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/crypto/openssl/rte_openssl_pmd.c | 28 ++++++++----------------
1 file changed, 9 insertions(+), 19 deletions(-)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index e10a172f46..7538ae2953 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -2,6 +2,7 @@
* Copyright(c) 2016-2017 Intel Corporation
*/
+#include <rte_byteorder.h>
#include <rte_common.h>
#include <rte_hexdump.h>
#include <rte_cryptodev.h>
@@ -99,22 +100,6 @@ digest_name_get(enum rte_crypto_auth_algorithm algo)
static int cryptodev_openssl_remove(struct rte_vdev_device *vdev);
-/*----------------------------------------------------------------------------*/
-
-/**
- * Increment counter by 1
- * Counter is 64 bit array, big-endian
- */
-static void
-ctr_inc(uint8_t *ctr)
-{
- uint64_t *ctr64 = (uint64_t *)ctr;
-
- *ctr64 = __builtin_bswap64(*ctr64);
- (*ctr64)++;
- *ctr64 = __builtin_bswap64(*ctr64);
-}
-
/*
*------------------------------------------------------------------------------
* Session Prepare
@@ -1192,7 +1177,8 @@ static int
process_openssl_cipher_des3ctr(struct rte_mbuf *mbuf_src, uint8_t *dst,
int offset, uint8_t *iv, int srclen, EVP_CIPHER_CTX *ctx)
{
- uint8_t ebuf[8], ctr[8];
+ uint8_t ebuf[8];
+ uint64_t ctr;
int unused, n;
struct rte_mbuf *m;
uint8_t *src;
@@ -1208,15 +1194,19 @@ process_openssl_cipher_des3ctr(struct rte_mbuf *mbuf_src, uint8_t *dst,
src = rte_pktmbuf_mtod_offset(m, uint8_t *, offset);
l = rte_pktmbuf_data_len(m) - offset;
- memcpy(ctr, iv, 8);
+ memcpy(&ctr, iv, 8);
for (n = 0; n < srclen; n++) {
if (n % 8 == 0) {
+ uint64_t cpu_ctr;
+
if (EVP_EncryptUpdate(ctx,
(unsigned char *)&ebuf, &unused,
(const unsigned char *)&ctr, 8) <= 0)
goto process_cipher_des3ctr_err;
- ctr_inc(ctr);
+ cpu_ctr = rte_be_to_cpu_64(ctr);
+ cpu_ctr++;
+ ctr = rte_cpu_to_be_64(cpu_ctr);
}
dst[n] = *(src++) ^ ebuf[n % 8];
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.412295323 +0800
+++ 0008-crypto-openssl-fix-3DES-CTR-with-big-endian-CPUs.patch 2024-12-06 23:26:43.853044829 +0800
@@ -1 +1 @@
-From 97afd07ca79c7270480a65febd7f616a4c0b07ca Mon Sep 17 00:00:00 2001
+From e594df339a8fd538e82b506a6a2c36dd71228ad0 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 97afd07ca79c7270480a65febd7f616a4c0b07ca ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 9657b70c7a..0616383921 100644
+index e10a172f46..7538ae2953 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix trace script for multiple burst completion' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (7 preceding siblings ...)
2024-12-07 7:59 ` patch 'crypto/openssl: fix 3DES-CTR with big endian CPUs' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix real time counter reading from PCI BAR' " Xueming Li
` (87 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Viacheslav Ovsiienko; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=367c6c9c840e344a4f5086bd9934964aeb2678f2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 367c6c9c840e344a4f5086bd9934964aeb2678f2 Mon Sep 17 00:00:00 2001
From: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Date: Mon, 14 Oct 2024 11:04:31 +0300
Subject: [PATCH] net/mlx5: fix trace script for multiple burst completion
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d67215274063cb853b6c64e218a8b6a8025adc19 ]
If multiple bursts were completed by a single completion, only the
first burst was moved to the done list.
The situation is not typical, because tracing was usually used for
debugging scheduled traffic, where each burst had its own completion
requested, so there were no completions covering multiple bursts.
Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/tools/mlx5_trace.py | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 8c1fd0a350..67461520a9 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -258,13 +258,14 @@ def do_tx_complete(msg, trace):
if burst.comp(wqe_id, wqe_ts) == 0:
break
rmv += 1
- # mode completed burst to done list
+ # move completed burst(s) to done list
if rmv != 0:
idx = 0
while idx < rmv:
+ burst = queue.wait_burst[idx]
queue.done_burst.append(burst)
idx += 1
- del queue.wait_burst[0:rmv]
+ queue.wait_burst = queue.wait_burst[rmv:]
def do_tx(msg, trace):
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.449723422 +0800
+++ 0009-net-mlx5-fix-trace-script-for-multiple-burst-complet.patch 2024-12-06 23:26:43.853044829 +0800
@@ -1 +1 @@
-From d67215274063cb853b6c64e218a8b6a8025adc19 Mon Sep 17 00:00:00 2001
+From 367c6c9c840e344a4f5086bd9934964aeb2678f2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d67215274063cb853b6c64e218a8b6a8025adc19 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix real time counter reading from PCI BAR' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (8 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix trace script for multiple burst completion' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix Tx tracing to use single clock source' " Xueming Li
` (86 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Tim Martin; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9af26e2cde2f4fbe57c53faa678e4d92dc5c17c0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9af26e2cde2f4fbe57c53faa678e4d92dc5c17c0 Mon Sep 17 00:00:00 2001
From: Tim Martin <timothym@nvidia.com>
Date: Mon, 14 Oct 2024 11:04:32 +0300
Subject: [PATCH] net/mlx5: fix real time counter reading from PCI BAR
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 27918f0d53f482fa97f2a8dcd5792c23094abcec ]
The mlx5_txpp_read_clock() routine reads
the 64-bit real time counter from the device PCI BAR.
It introduced two issues:
- it checks whether the PCI BAR is mapped into the process address
space and tries to map it on demand. This might be problematic if
something goes wrong and the mapping fails. The check happens on every
read_clock API call and invokes the kernel, taking a long time and
causing application malfunction.
- the 64-bit counter should be read in a single atomic transaction.
Fixes: 9b31fc9007f9 ("net/mlx5: fix read device clock in real time mode")
Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
.mailmap | 1 +
drivers/net/mlx5/mlx5.c | 4 ++++
drivers/net/mlx5/mlx5_tx.h | 34 +++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_txpp.c | 11 ++---------
4 files changed, 40 insertions(+), 10 deletions(-)
diff --git a/.mailmap b/.mailmap
index 8a0ce733c3..7990edf2d0 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1456,6 +1456,7 @@ Timmons C. Player <timmons.player@spirent.com>
Timothy McDaniel <timothy.mcdaniel@intel.com>
Timothy Miskell <timothy.miskell@intel.com>
Timothy Redaelli <tredaelli@redhat.com>
+Tim Martin <timothym@nvidia.com>
Tim Shearer <tim.shearer@overturenetworks.com>
Ting-Kai Ku <ting-kai.ku@intel.com>
Ting Xu <ting.xu@intel.com>
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8d4a0a3dda..25182bce39 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -2164,6 +2164,7 @@ int
mlx5_proc_priv_init(struct rte_eth_dev *dev)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
struct mlx5_proc_priv *ppriv;
size_t ppriv_size;
@@ -2184,6 +2185,9 @@ mlx5_proc_priv_init(struct rte_eth_dev *dev)
dev->process_private = ppriv;
if (rte_eal_process_type() == RTE_PROC_PRIMARY)
priv->sh->pppriv = ppriv;
+ /* Check and try to map HCA PCI BAR to allow reading real time. */
+ if (sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
+ mlx5_txpp_map_hca_bar(dev);
return 0;
}
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index e59ce37667..42fc7ba3b3 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -369,6 +369,38 @@ mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts)
return ci;
}
+/**
+ * Read real time clock counter directly from the device PCI BAR area.
+ * The PCI BAR must be mapped to the process memory space at initialization.
+ *
+ * @param dev
+ * Device to read clock counter from
+ *
+ * @return
+ * 0 - if HCA BAR is not supported or not mapped.
+ * !=0 - read 64-bit value of real-time in UTC formatv (nanoseconds)
+ */
+static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *dev)
+{
+ struct mlx5_proc_priv *ppriv = dev->process_private;
+
+ if (ppriv && ppriv->hca_bar) {
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_dev_ctx_shared *sh = priv->sh;
+ uint64_t *hca_ptr = (uint64_t *)(ppriv->hca_bar) +
+ __mlx5_64_off(initial_seg, real_time);
+ uint64_t __rte_atomic *ts_addr;
+ uint64_t ts;
+
+ ts_addr = (uint64_t __rte_atomic *)hca_ptr;
+ ts = rte_atomic_load_explicit(ts_addr, rte_memory_order_seq_cst);
+ ts = rte_be_to_cpu_64(ts);
+ ts = mlx5_txpp_convert_rx_ts(sh, ts);
+ return ts;
+ }
+ return 0;
+}
+
/**
* Set Software Parser flags and offsets in Ethernet Segment of WQE.
* Flags must be preliminary initialized to zero.
@@ -819,7 +851,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
MLX5_COMP_MODE_OFFSET);
cs->misc = RTE_BE32(0);
- if (__rte_trace_point_fp_is_enabled() && !loc->pkts_sent)
+ if (__rte_trace_point_fp_is_enabled())
rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
}
diff --git a/drivers/net/mlx5/mlx5_txpp.c b/drivers/net/mlx5/mlx5_txpp.c
index 5a5df2d1bb..0184060c3f 100644
--- a/drivers/net/mlx5/mlx5_txpp.c
+++ b/drivers/net/mlx5/mlx5_txpp.c
@@ -971,7 +971,6 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_ctx_shared *sh = priv->sh;
- struct mlx5_proc_priv *ppriv;
uint64_t ts;
int ret;
@@ -997,15 +996,9 @@ mlx5_txpp_read_clock(struct rte_eth_dev *dev, uint64_t *timestamp)
*timestamp = ts;
return 0;
}
- /* Check and try to map HCA PIC BAR to allow reading real time. */
- ppriv = dev->process_private;
- if (ppriv && !ppriv->hca_bar &&
- sh->dev_cap.rt_timestamp && mlx5_dev_is_pci(dev->device))
- mlx5_txpp_map_hca_bar(dev);
/* Check if we can read timestamp directly from hardware. */
- if (ppriv && ppriv->hca_bar) {
- ts = MLX5_GET64(initial_seg, ppriv->hca_bar, real_time);
- ts = mlx5_txpp_convert_rx_ts(sh, ts);
+ ts = mlx5_read_pcibar_clock(dev);
+ if (ts != 0) {
*timestamp = ts;
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.479596122 +0800
+++ 0010-net-mlx5-fix-real-time-counter-reading-from-PCI-BAR.patch 2024-12-06 23:26:43.853044829 +0800
@@ -1 +1 @@
-From 27918f0d53f482fa97f2a8dcd5792c23094abcec Mon Sep 17 00:00:00 2001
+From 9af26e2cde2f4fbe57c53faa678e4d92dc5c17c0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 27918f0d53f482fa97f2a8dcd5792c23094abcec ]
@@ -20 +22,0 @@
-Cc: stable@dpdk.org
@@ -32 +34 @@
-index 5290420258..504c390f0f 100644
+index 8a0ce733c3..7990edf2d0 100644
@@ -35 +37 @@
-@@ -1523,6 +1523,7 @@ Timmons C. Player <timmons.player@spirent.com>
+@@ -1456,6 +1456,7 @@ Timmons C. Player <timmons.player@spirent.com>
@@ -44 +46 @@
-index e36fa651a1..52b90e6ff3 100644
+index 8d4a0a3dda..25182bce39 100644
@@ -47 +49 @@
-@@ -2242,6 +2242,7 @@ int
+@@ -2164,6 +2164,7 @@ int
@@ -55 +57 @@
-@@ -2262,6 +2263,9 @@ mlx5_proc_priv_init(struct rte_eth_dev *dev)
+@@ -2184,6 +2185,9 @@ mlx5_proc_priv_init(struct rte_eth_dev *dev)
@@ -66 +68 @@
-index 983913faa2..587e6a9f7d 100644
+index e59ce37667..42fc7ba3b3 100644
@@ -69 +71 @@
-@@ -372,6 +372,38 @@ mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts)
+@@ -369,6 +369,38 @@ mlx5_txpp_convert_tx_ts(struct mlx5_dev_ctx_shared *sh, uint64_t mts)
@@ -108 +110 @@
-@@ -822,7 +854,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
+@@ -819,7 +851,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
@@ -118 +120 @@
-index 4e26fa2db8..e6d3ad83e9 100644
+index 5a5df2d1bb..0184060c3f 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix Tx tracing to use single clock source' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (9 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix real time counter reading from PCI BAR' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'eal/unix: optimize thread creation' " Xueming Li
` (85 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Tim Martin; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=53b225e96b9cd6c6d952ada69ca9553b3c7d9f2d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 53b225e96b9cd6c6d952ada69ca9553b3c7d9f2d Mon Sep 17 00:00:00 2001
From: Tim Martin <timothym@nvidia.com>
Date: Mon, 14 Oct 2024 11:04:33 +0300
Subject: [PATCH] net/mlx5: fix Tx tracing to use single clock source
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 02932480ae82d7ed3c207f02cc40b508cdda6ded ]
A prior commit introduced tracing for mlx5, but it mixed two unrelated
clocks: the TSC for host work submission timestamps and the NIC HW clock
for CQE completion times. It is necessary to have
timestamps from a single common clock, and the NIC HW clock is the
better choice since it can be used with externally synchronized clocks.
This patch adds the NIC HW clock as an additional logged parameter for
trace_tx_entry, trace_tx_exit, and trace_tx_wqe. The included trace
analysis python script is also updated to use the new clock when
it is available.
Fixes: a1e910f5b8d4 ("net/mlx5: introduce tracepoints")
Fixes: 9725191a7e14 ("net/mlx5: add Tx datapath trace analyzing script")
Signed-off-by: Tim Martin <timothym@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_trace.h | 9 ++++++---
drivers/net/mlx5/mlx5_tx.h | 21 +++++++++++++++++----
drivers/net/mlx5/tools/mlx5_trace.py | 12 +++++++++---
3 files changed, 32 insertions(+), 10 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_trace.h b/drivers/net/mlx5/mlx5_trace.h
index 888d96f60b..656dbb1a4f 100644
--- a/drivers/net/mlx5/mlx5_trace.h
+++ b/drivers/net/mlx5/mlx5_trace.h
@@ -22,21 +22,24 @@ extern "C" {
/* TX burst subroutines trace points. */
RTE_TRACE_POINT_FP(
rte_pmd_mlx5_trace_tx_entry,
- RTE_TRACE_POINT_ARGS(uint16_t port_id, uint16_t queue_id),
+ RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t port_id, uint16_t queue_id),
+ rte_trace_point_emit_u64(real_time);
rte_trace_point_emit_u16(port_id);
rte_trace_point_emit_u16(queue_id);
)
RTE_TRACE_POINT_FP(
rte_pmd_mlx5_trace_tx_exit,
- RTE_TRACE_POINT_ARGS(uint16_t nb_sent, uint16_t nb_req),
+ RTE_TRACE_POINT_ARGS(uint64_t real_time, uint16_t nb_sent, uint16_t nb_req),
+ rte_trace_point_emit_u64(real_time);
rte_trace_point_emit_u16(nb_sent);
rte_trace_point_emit_u16(nb_req);
)
RTE_TRACE_POINT_FP(
rte_pmd_mlx5_trace_tx_wqe,
- RTE_TRACE_POINT_ARGS(uint32_t opcode),
+ RTE_TRACE_POINT_ARGS(uint64_t real_time, uint32_t opcode),
+ rte_trace_point_emit_u64(real_time);
rte_trace_point_emit_u32(opcode);
)
diff --git a/drivers/net/mlx5/mlx5_tx.h b/drivers/net/mlx5/mlx5_tx.h
index 42fc7ba3b3..46559426fe 100644
--- a/drivers/net/mlx5/mlx5_tx.h
+++ b/drivers/net/mlx5/mlx5_tx.h
@@ -401,6 +401,14 @@ static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *d
return 0;
}
+static __rte_always_inline uint64_t mlx5_read_pcibar_clock_from_txq(struct mlx5_txq_data *txq)
+{
+ struct mlx5_txq_ctrl *txq_ctrl = container_of(txq, struct mlx5_txq_ctrl, txq);
+ struct rte_eth_dev *dev = ETH_DEV(txq_ctrl->priv);
+
+ return mlx5_read_pcibar_clock(dev);
+}
+
/**
* Set Software Parser flags and offsets in Ethernet Segment of WQE.
* Flags must be preliminary initialized to zero.
@@ -838,6 +846,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
unsigned int olx)
{
struct mlx5_wqe_cseg *__rte_restrict cs = &wqe->cseg;
+ uint64_t real_time;
/* For legacy MPW replace the EMPW by TSO with modifier. */
if (MLX5_TXOFF_CONFIG(MPW) && opcode == MLX5_OPCODE_ENHANCED_MPSW)
@@ -851,9 +860,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
cs->flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<
MLX5_COMP_MODE_OFFSET);
cs->misc = RTE_BE32(0);
- if (__rte_trace_point_fp_is_enabled())
- rte_pmd_mlx5_trace_tx_entry(txq->port_id, txq->idx);
- rte_pmd_mlx5_trace_tx_wqe((txq->wqe_ci << 8) | opcode);
+ if (__rte_trace_point_fp_is_enabled()) {
+ real_time = mlx5_read_pcibar_clock_from_txq(txq);
+ if (!loc->pkts_sent)
+ rte_pmd_mlx5_trace_tx_entry(real_time, txq->port_id, txq->idx);
+ rte_pmd_mlx5_trace_tx_wqe(real_time, (txq->wqe_ci << 8) | opcode);
+ }
}
/**
@@ -3815,7 +3827,8 @@ burst_exit:
__mlx5_tx_free_mbuf(txq, pkts, loc.mbuf_free, olx);
/* Trace productive bursts only. */
if (__rte_trace_point_fp_is_enabled() && loc.pkts_sent)
- rte_pmd_mlx5_trace_tx_exit(loc.pkts_sent, pkts_n);
+ rte_pmd_mlx5_trace_tx_exit(mlx5_read_pcibar_clock_from_txq(txq),
+ loc.pkts_sent, pkts_n);
return loc.pkts_sent;
}
diff --git a/drivers/net/mlx5/tools/mlx5_trace.py b/drivers/net/mlx5/tools/mlx5_trace.py
index 67461520a9..5eb634a490 100755
--- a/drivers/net/mlx5/tools/mlx5_trace.py
+++ b/drivers/net/mlx5/tools/mlx5_trace.py
@@ -174,7 +174,9 @@ def do_tx_entry(msg, trace):
return
# allocate the new burst and append to the queue
burst = MlxBurst()
- burst.call_ts = msg.default_clock_snapshot.ns_from_origin
+ burst.call_ts = event["real_time"]
+ if burst.call_ts == 0:
+ burst.call_ts = msg.default_clock_snapshot.ns_from_origin
trace.tx_blst[cpu_id] = burst
pq_id = event["port_id"] << 16 | event["queue_id"]
queue = trace.tx_qlst.get(pq_id)
@@ -194,7 +196,9 @@ def do_tx_exit(msg, trace):
burst = trace.tx_blst.get(cpu_id)
if burst is None:
return
- burst.done_ts = msg.default_clock_snapshot.ns_from_origin
+ burst.done_ts = event["real_time"]
+ if burst.done_ts == 0:
+ burst.done_ts = msg.default_clock_snapshot.ns_from_origin
burst.req = event["nb_req"]
burst.done = event["nb_sent"]
trace.tx_blst.pop(cpu_id)
@@ -210,7 +214,9 @@ def do_tx_wqe(msg, trace):
wqe = MlxWqe()
wqe.wait_ts = trace.tx_wlst.get(cpu_id)
if wqe.wait_ts is None:
- wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
+ wqe.wait_ts = event["real_time"]
+ if wqe.wait_ts == 0:
+ wqe.wait_ts = msg.default_clock_snapshot.ns_from_origin
wqe.opcode = event["opcode"]
burst.wqes.append(wqe)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.517307221 +0800
+++ 0011-net-mlx5-fix-Tx-tracing-to-use-single-clock-source.patch 2024-12-06 23:26:43.863044829 +0800
@@ -1 +1 @@
-From 02932480ae82d7ed3c207f02cc40b508cdda6ded Mon Sep 17 00:00:00 2001
+From 53b225e96b9cd6c6d952ada69ca9553b3c7d9f2d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 02932480ae82d7ed3c207f02cc40b508cdda6ded ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -30 +32 @@
-index a8f0b372c8..4fc3584acc 100644
+index 888d96f60b..656dbb1a4f 100644
@@ -62 +64 @@
-index 587e6a9f7d..55568c41b1 100644
+index 42fc7ba3b3..46559426fe 100644
@@ -65 +67 @@
-@@ -404,6 +404,14 @@ static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *d
+@@ -401,6 +401,14 @@ static __rte_always_inline uint64_t mlx5_read_pcibar_clock(struct rte_eth_dev *d
@@ -80 +82 @@
-@@ -841,6 +849,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
+@@ -838,6 +846,7 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
@@ -88 +90 @@
-@@ -854,9 +863,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
+@@ -851,9 +860,12 @@ mlx5_tx_cseg_init(struct mlx5_txq_data *__rte_restrict txq,
@@ -104 +106 @@
-@@ -3818,7 +3830,8 @@ burst_exit:
+@@ -3815,7 +3827,8 @@ burst_exit:
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'eal/unix: optimize thread creation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (10 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix Tx tracing to use single clock source' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-09 7:00 ` David Marchand
2024-12-07 7:59 ` patch 'net/mlx5: fix memory leak in metering' " Xueming Li
` (84 subsequent siblings)
96 siblings, 1 reply; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: David Marchand
Cc: Xueming Li, Luca Boccassi, Stephen Hemminger, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=64ea25a3a4316914d15d09ec9c695cf50f009ba5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 64ea25a3a4316914d15d09ec9c695cf50f009ba5 Mon Sep 17 00:00:00 2001
From: David Marchand <david.marchand@redhat.com>
Date: Sat, 2 Nov 2024 10:38:16 +0100
Subject: [PATCH] eal/unix: optimize thread creation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 64f27886b8bf127cd365a8a3ed5c05852a5ae81d ]
Setting the cpu affinity of the child thread from the parent thread is
racy when using pthread_setaffinity_np, as the child thread may start
running and initialize before affinity is set.
On the other hand, setting the cpu affinity from the child thread itself
may fail, so the parent thread waits for the child thread to report
whether this call succeeded.
This synchronisation point resulted in a significant slowdown of
rte_thread_create() (as seen in the lcores_autotest unit tests, in OBS
for some ARM systems).
Another option for setting CPU affinity is the non-portable
pthread_attr_setaffinity_np, which is available in FreeBSD and glibc
but not in musl.
Fixes: b28c6196b132 ("eal/unix: fix thread creation")
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
lib/eal/unix/meson.build | 5 +++++
lib/eal/unix/rte_thread.c | 25 +++++++++++++++++++++++++
2 files changed, 30 insertions(+)
diff --git a/lib/eal/unix/meson.build b/lib/eal/unix/meson.build
index cc7d67dd32..f1eb82e16a 100644
--- a/lib/eal/unix/meson.build
+++ b/lib/eal/unix/meson.build
@@ -11,3 +11,8 @@ sources += files(
'eal_unix_timer.c',
'rte_thread.c',
)
+
+if is_freebsd or cc.has_function('pthread_attr_setaffinity_np', args: '-D_GNU_SOURCE',
+ prefix : '#include <pthread.h>')
+ cflags += '-DRTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP'
+endif
diff --git a/lib/eal/unix/rte_thread.c b/lib/eal/unix/rte_thread.c
index 36a21ab2f9..7eb9098254 100644
--- a/lib/eal/unix/rte_thread.c
+++ b/lib/eal/unix/rte_thread.c
@@ -17,6 +17,7 @@ struct eal_tls_key {
pthread_key_t thread_index;
};
+#ifndef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
struct thread_start_context {
rte_thread_func thread_func;
void *thread_args;
@@ -26,6 +27,7 @@ struct thread_start_context {
int wrapper_ret;
bool wrapper_done;
};
+#endif
static int
thread_map_priority_to_os_value(enum rte_thread_priority eal_pri, int *os_pri,
@@ -86,6 +88,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri,
return 0;
}
+#ifndef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
static void *
thread_start_wrapper(void *arg)
{
@@ -111,6 +114,7 @@ thread_start_wrapper(void *arg)
return (void *)(uintptr_t)thread_func(thread_args);
}
+#endif
int
rte_thread_create(rte_thread_t *thread_id,
@@ -124,6 +128,7 @@ rte_thread_create(rte_thread_t *thread_id,
.sched_priority = 0,
};
int policy = SCHED_OTHER;
+#ifndef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
struct thread_start_context ctx = {
.thread_func = thread_func,
.thread_args = args,
@@ -132,6 +137,7 @@ rte_thread_create(rte_thread_t *thread_id,
.wrapper_mutex = PTHREAD_MUTEX_INITIALIZER,
.wrapper_cond = PTHREAD_COND_INITIALIZER,
};
+#endif
if (thread_attr != NULL) {
ret = pthread_attr_init(&attr);
@@ -142,6 +148,16 @@ rte_thread_create(rte_thread_t *thread_id,
attrp = &attr;
+#ifdef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
+ if (CPU_COUNT(&thread_attr->cpuset) > 0) {
+ ret = pthread_attr_setaffinity_np(attrp, sizeof(thread_attr->cpuset),
+ &thread_attr->cpuset);
+ if (ret != 0) {
+ RTE_LOG(DEBUG, EAL, "pthread_attr_setaffinity_np failed\n");
+ goto cleanup;
+ }
+ }
+#endif
/*
* Set the inherit scheduler parameter to explicit,
* otherwise the priority attribute is ignored.
@@ -176,6 +192,14 @@ rte_thread_create(rte_thread_t *thread_id,
}
}
+#ifdef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
+ ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
+ (void *)(void *)thread_func, args);
+ if (ret != 0) {
+ RTE_LOG(DEBUG, EAL, "pthread_create failed");
+ goto cleanup;
+ }
+#else /* !RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP */
ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
thread_start_wrapper, &ctx);
if (ret != 0) {
@@ -191,6 +215,7 @@ rte_thread_create(rte_thread_t *thread_id,
if (ret != 0)
rte_thread_join(*thread_id, NULL);
+#endif /* RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP */
cleanup:
if (attrp != NULL)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.552623621 +0800
+++ 0012-eal-unix-optimize-thread-creation.patch 2024-12-06 23:26:43.863044829 +0800
@@ -1 +1 @@
-From 64f27886b8bf127cd365a8a3ed5c05852a5ae81d Mon Sep 17 00:00:00 2001
+From 64ea25a3a4316914d15d09ec9c695cf50f009ba5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 64f27886b8bf127cd365a8a3ed5c05852a5ae81d ]
@@ -23 +25,0 @@
-Cc: stable@dpdk.org
@@ -48 +50 @@
-index 1b4c73f58e..ea629c2065 100644
+index 36a21ab2f9..7eb9098254 100644
@@ -51 +53 @@
-@@ -19,6 +19,7 @@ struct eal_tls_key {
+@@ -17,6 +17,7 @@ struct eal_tls_key {
@@ -59 +61 @@
-@@ -28,6 +29,7 @@ struct thread_start_context {
+@@ -26,6 +27,7 @@ struct thread_start_context {
@@ -67 +69 @@
-@@ -88,6 +90,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri,
+@@ -86,6 +88,7 @@ thread_map_os_priority_to_eal_priority(int policy, int os_pri,
@@ -75 +77 @@
-@@ -113,6 +116,7 @@ thread_start_wrapper(void *arg)
+@@ -111,6 +114,7 @@ thread_start_wrapper(void *arg)
@@ -83 +85 @@
-@@ -126,6 +130,7 @@ rte_thread_create(rte_thread_t *thread_id,
+@@ -124,6 +128,7 @@ rte_thread_create(rte_thread_t *thread_id,
@@ -91 +93 @@
-@@ -134,6 +139,7 @@ rte_thread_create(rte_thread_t *thread_id,
+@@ -132,6 +137,7 @@ rte_thread_create(rte_thread_t *thread_id,
@@ -99 +101 @@
-@@ -144,6 +150,16 @@ rte_thread_create(rte_thread_t *thread_id,
+@@ -142,6 +148,16 @@ rte_thread_create(rte_thread_t *thread_id,
@@ -108 +110 @@
-+ EAL_LOG(DEBUG, "pthread_attr_setaffinity_np failed");
++ RTE_LOG(DEBUG, EAL, "pthread_attr_setaffinity_np failed\n");
@@ -116 +118 @@
-@@ -178,6 +194,14 @@ rte_thread_create(rte_thread_t *thread_id,
+@@ -176,6 +192,14 @@ rte_thread_create(rte_thread_t *thread_id,
@@ -124 +126 @@
-+ EAL_LOG(DEBUG, "pthread_create failed");
++ RTE_LOG(DEBUG, EAL, "pthread_create failed");
@@ -131 +133 @@
-@@ -193,6 +217,7 @@ rte_thread_create(rte_thread_t *thread_id,
+@@ -191,6 +215,7 @@ rte_thread_create(rte_thread_t *thread_id,
^ permalink raw reply [flat|nested] 230+ messages in thread
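As context for the patch above, here is a minimal, self-contained sketch (not DPDK code, and independent of the rte_thread wrappers) of the attribute-based affinity approach it enables. It assumes a Linux/glibc build where pthread_attr_setaffinity_np() exists, which is the same condition the meson check in the patch probes for; the CPU number 1 and the worker function are placeholders.
```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
	(void)arg;
	/* The affinity was applied before this thread started running. */
	printf("worker running on CPU %d\n", sched_getcpu());
	return NULL;
}

int main(void)
{
	pthread_attr_t attr;
	cpu_set_t cpus;
	pthread_t tid;

	CPU_ZERO(&cpus);
	CPU_SET(1, &cpus);	/* placeholder: pin the new thread to CPU 1 */

	pthread_attr_init(&attr);
	/* Affinity is validated here, before the thread exists, so no
	 * parent/child synchronisation point is needed afterwards. */
	if (pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus) != 0)
		return 1;
	if (pthread_create(&tid, &attr, worker, NULL) != 0)
		return 1;
	pthread_attr_destroy(&attr);
	pthread_join(tid, NULL);
	return 0;
}
```
The trade-off described in the commit message is portability: musl does not provide this call, so the wrapper-based path remains behind the RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP guard.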
* patch 'net/mlx5: fix memory leak in metering' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (11 preceding siblings ...)
2024-12-07 7:59 ` patch 'eal/unix: optimize thread creation' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix GRE flow item translation for root table' " Xueming Li
` (83 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Shun Hao; +Cc: Xueming Li, Bing Zhao, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4e2de6fbed81e68f9a92807bfbb259482b6973ba
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4e2de6fbed81e68f9a92807bfbb259482b6973ba Mon Sep 17 00:00:00 2001
From: Shun Hao <shunh@nvidia.com>
Date: Wed, 23 Oct 2024 09:22:15 +0300
Subject: [PATCH] net/mlx5: fix memory leak in metering
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4dd46d38820e0bf5e74f99b84f4b098d1b7220dd ]
Avoid allocating memory for the meter profile table when metering is not
enabled. This memory was not freed in the close process when metering
was disabled, potentially causing a leak.
Fixes: a295c69a8b24 ("net/mlx5: optimize meter profile lookup")
Signed-off-by: Shun Hao <shunh@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
---
drivers/net/mlx5/linux/mlx5_os.c | 8 +++++---
drivers/net/mlx5/mlx5_flow_meter.c | 4 ++--
drivers/net/mlx5/windows/mlx5_os.c | 8 +++++---
3 files changed, 12 insertions(+), 8 deletions(-)
diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 2241e84341..9dcdc8581a 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -1570,9 +1570,11 @@ err_secondary:
priv->ctrl_flows = 0;
rte_spinlock_init(&priv->flow_list_lock);
TAILQ_INIT(&priv->flow_meters);
- priv->mtr_profile_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_PTR);
- if (!priv->mtr_profile_tbl)
- goto error;
+ if (priv->mtr_en) {
+ priv->mtr_profile_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_PTR);
+ if (!priv->mtr_profile_tbl)
+ goto error;
+ }
/* Bring Ethernet device up. */
DRV_LOG(DEBUG, "port %u forcing Ethernet interface up",
eth_dev->data->port_id);
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index 7bf5018c70..a4d954cb1f 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -100,8 +100,8 @@ mlx5_flow_meter_profile_find(struct mlx5_priv *priv, uint32_t meter_profile_id)
if (priv->mtr_profile_arr)
return &priv->mtr_profile_arr[meter_profile_id];
- if (mlx5_l3t_get_entry(priv->mtr_profile_tbl,
- meter_profile_id, &data) || !data.ptr)
+ if (!priv->mtr_profile_tbl ||
+ mlx5_l3t_get_entry(priv->mtr_profile_tbl, meter_profile_id, &data) || !data.ptr)
return NULL;
fmp = data.ptr;
/* Remove reference taken by the mlx5_l3t_get_entry. */
diff --git a/drivers/net/mlx5/windows/mlx5_os.c b/drivers/net/mlx5/windows/mlx5_os.c
index b731bdff06..a9614b125b 100644
--- a/drivers/net/mlx5/windows/mlx5_os.c
+++ b/drivers/net/mlx5/windows/mlx5_os.c
@@ -518,9 +518,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
claim_zero(mlx5_mac_addr_add(eth_dev, &mac, 0, 0));
priv->ctrl_flows = 0;
TAILQ_INIT(&priv->flow_meters);
- priv->mtr_profile_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_PTR);
- if (!priv->mtr_profile_tbl)
- goto error;
+ if (priv->mtr_en) {
+ priv->mtr_profile_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_PTR);
+ if (!priv->mtr_profile_tbl)
+ goto error;
+ }
/* Bring Ethernet device up. */
DRV_LOG(DEBUG, "port %u forcing Ethernet interface up.",
eth_dev->data->port_id);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.581205921 +0800
+++ 0013-net-mlx5-fix-memory-leak-in-metering.patch 2024-12-06 23:26:43.863044829 +0800
@@ -1 +1 @@
-From 4dd46d38820e0bf5e74f99b84f4b098d1b7220dd Mon Sep 17 00:00:00 2001
+From 4e2de6fbed81e68f9a92807bfbb259482b6973ba Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4dd46d38820e0bf5e74f99b84f4b098d1b7220dd ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 7d0d4bf23b..69a80b9ddc 100644
+index 2241e84341..9dcdc8581a 100644
@@ -25 +27 @@
-@@ -1612,9 +1612,11 @@ err_secondary:
+@@ -1570,9 +1570,11 @@ err_secondary:
@@ -41 +43 @@
-index 19d8607070..98a61cbdd4 100644
+index 7bf5018c70..a4d954cb1f 100644
@@ -44 +46 @@
-@@ -378,8 +378,8 @@ mlx5_flow_meter_profile_find(struct mlx5_priv *priv, uint32_t meter_profile_id)
+@@ -100,8 +100,8 @@ mlx5_flow_meter_profile_find(struct mlx5_priv *priv, uint32_t meter_profile_id)
@@ -56 +58 @@
-index 80f1679388..268598f209 100644
+index b731bdff06..a9614b125b 100644
@@ -59 +61 @@
-@@ -521,9 +521,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
+@@ -518,9 +518,11 @@ mlx5_dev_spawn(struct rte_device *dpdk_dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix GRE flow item translation for root table' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (12 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix memory leak in metering' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5/hws: fix range definer error recovery' " Xueming Li
` (82 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Suanming Mou, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e75c93ede016324bcb760712717bb784c4018a88
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e75c93ede016324bcb760712717bb784c4018a88 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Sun, 27 Oct 2024 14:39:16 +0200
Subject: [PATCH] net/mlx5: fix GRE flow item translation for root table
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 25ab2cbba31d937e685f0cf9ecce0c680cc4083e ]
Flow item translation for the root tables reuses DV code. However,
the DV GRE item translation did not initialize the item mask for the HWS
template. Initialize the mask to fix GRE item translation for root
tables when using HWS.
Fixes: cd4ab742064a ("net/mlx5: split flow item matcher and value translation")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 23 ++++++++++++-----------
1 file changed, 12 insertions(+), 11 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index b447b1598a..09c7068339 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9476,22 +9476,23 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
} gre_crks_rsvd0_ver_m, gre_crks_rsvd0_ver_v;
uint16_t protocol_m, protocol_v;
- if (key_type & MLX5_SET_MATCHER_M)
+ if (key_type & MLX5_SET_MATCHER_M) {
MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol, 0xff);
- else
+ if (!gre_m)
+ gre_m = &rte_flow_item_gre_mask;
+ gre_v = gre_m;
+ } else {
MLX5_SET(fte_match_set_lyr_2_4, headers_v, ip_protocol,
IPPROTO_GRE);
- if (!gre_v) {
- gre_v = &empty_gre;
- gre_m = &empty_gre;
- } else {
- if (!gre_m)
+ if (!gre_v) {
+ gre_v = &empty_gre;
+ gre_m = &empty_gre;
+ } else if (!gre_m) {
gre_m = &rte_flow_item_gre_mask;
+ }
+ if (key_type == MLX5_SET_MATCHER_HS_V)
+ gre_m = gre_v;
}
- if (key_type & MLX5_SET_MATCHER_M)
- gre_v = gre_m;
- else if (key_type == MLX5_SET_MATCHER_HS_V)
- gre_m = gre_v;
gre_crks_rsvd0_ver_m.value = rte_be_to_cpu_16(gre_m->c_rsvd0_ver);
gre_crks_rsvd0_ver_v.value = rte_be_to_cpu_16(gre_v->c_rsvd0_ver);
MLX5_SET(fte_match_set_misc, misc_v, gre_c_present,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.614033120 +0800
+++ 0014-net-mlx5-fix-GRE-flow-item-translation-for-root-tabl.patch 2024-12-06 23:26:43.873044829 +0800
@@ -1 +1 @@
-From 25ab2cbba31d937e685f0cf9ecce0c680cc4083e Mon Sep 17 00:00:00 2001
+From e75c93ede016324bcb760712717bb784c4018a88 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 25ab2cbba31d937e685f0cf9ecce0c680cc4083e ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 040727f2e8..dc5263ace3 100644
+index b447b1598a..09c7068339 100644
@@ -24 +26 @@
-@@ -9830,22 +9830,23 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
+@@ -9476,22 +9476,23 @@ flow_dv_translate_item_gre(void *key, const struct rte_flow_item *item,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5/hws: fix range definer error recovery' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (13 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix GRE flow item translation for root table' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix SQ flow item size' " Xueming Li
` (81 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=40ea0017dda717772ee92e3d17c277425d18b64d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 40ea0017dda717772ee92e3d17c277425d18b64d Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Sun, 27 Oct 2024 15:07:23 +0200
Subject: [PATCH] net/mlx5/hws: fix range definer error recovery
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 84c3090e517641027a7b64fe5bb6eccbcfa05a6d ]
Assign EINVAL to rte_errno when an invalid matcher range definition
is detected. This ensures the calling function is properly notified
about the error condition, which was previously not being reported.
Fixes: 9732ffe13bd6 ("net/mlx5/hws: add range definer creation")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr_definer.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/mlx5/hws/mlx5dr_definer.c b/drivers/net/mlx5/hws/mlx5dr_definer.c
index daee2b6eb7..ef437a6dbd 100644
--- a/drivers/net/mlx5/hws/mlx5dr_definer.c
+++ b/drivers/net/mlx5/hws/mlx5dr_definer.c
@@ -3221,6 +3221,7 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
if (i && ((is_range && !has_range) || (!is_range && has_range))) {
DR_LOG(ERR, "Using range and non range templates is not allowed");
+ rte_errno = EINVAL;
goto free_definers;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-12-06 23:26:44.658157620 +0800
+++ 0015-net-mlx5-hws-fix-range-definer-error-recovery.patch 2024-12-06 23:26:43.873044829 +0800
@@ -1 +1 @@
-From 84c3090e517641027a7b64fe5bb6eccbcfa05a6d Mon Sep 17 00:00:00 2001
+From 40ea0017dda717772ee92e3d17c277425d18b64d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 84c3090e517641027a7b64fe5bb6eccbcfa05a6d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 5c2e889444..5260830d9b 100644
+index daee2b6eb7..ef437a6dbd 100644
@@ -23 +25 @@
-@@ -4058,6 +4058,7 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
+@@ -3221,6 +3221,7 @@ mlx5dr_definer_matcher_range_init(struct mlx5dr_context *ctx,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix SQ flow item size' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (14 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5/hws: fix range definer error recovery' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix non-template flow action validation' " Xueming Li
` (80 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5599cdc9f1b9c33585531c4d0c4e0a99219fae1f
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5599cdc9f1b9c33585531c4d0c4e0a99219fae1f Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Sun, 27 Oct 2024 16:09:40 +0200
Subject: [PATCH] net/mlx5: fix SQ flow item size
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7c66fa49ddcce1981c2fa3a0c024ec82b036639c ]
Expand the size of struct mlx5_rte_flow_item_sq to 64 bits on 64-bit
systems. This aligns with DPDK's assumption that PMD private data has
pointer size when copying flow items with rte_flow_conv.[1]
Previously, the struct was defined as 32 bits, causing DPDK to
incorrectly assign an additional 32 bits when copying
MLX5_RTE_FLOW_ITEM_TYPE_SQ items on 64-bit systems.
This fix ensures proper memory alignment and prevents potential
buffer overflows when DPDK copies MLX5_RTE_FLOW_ITEM_TYPE_SQ items.
[1]:
commit 6cf72047332b ("ethdev: support flow elements with variable length")
Fixes: 75a00812b18f ("net/mlx5: add hardware steering item translation")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 3 +++
1 file changed, 3 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index bde7dc43a8..e6b0c1bb80 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -157,6 +157,9 @@ struct mlx5_flow_action_copy_mreg {
/* Matches on source queue. */
struct mlx5_rte_flow_item_sq {
uint32_t queue; /* DevX SQ number */
+#ifdef RTE_ARCH_64
+ uint32_t reserved;
+#endif
};
/* Feature name to allocate metadata register. */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.688173719 +0800
+++ 0016-net-mlx5-fix-SQ-flow-item-size.patch 2024-12-06 23:26:43.883044829 +0800
@@ -1 +1 @@
-From 7c66fa49ddcce1981c2fa3a0c024ec82b036639c Mon Sep 17 00:00:00 2001
+From 5599cdc9f1b9c33585531c4d0c4e0a99219fae1f Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7c66fa49ddcce1981c2fa3a0c024ec82b036639c ]
@@ -21 +23,0 @@
-Cc: stable@dpdk.org
@@ -30 +32 @@
-index 9a8eccdd25..f5866af231 100644
+index bde7dc43a8..e6b0c1bb80 100644
@@ -33 +35 @@
-@@ -168,6 +168,9 @@ struct mlx5_flow_action_copy_mreg {
+@@ -157,6 +157,9 @@ struct mlx5_flow_action_copy_mreg {
@@ -42 +44 @@
- /* Map from registers to modify fields. */
+ /* Feature name to allocate metadata register. */
^ permalink raw reply [flat|nested] 230+ messages in thread
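To illustrate the sizing issue described in the commit message above, here is a small standalone sketch (illustrative only, not the mlx5 definitions; the struct names and the UINTPTR_MAX check are stand-ins for the real code and for RTE_ARCH_64). It shows how a 4-byte private item falls short of the pointer-sized copy that rte_flow_conv() performs on a 64-bit build, and how padding restores the expected size.
```c
#include <stdint.h>
#include <stdio.h>

struct item_sq_before {		/* 4 bytes: smaller than a 64-bit pointer */
	uint32_t queue;
};

struct item_sq_after {		/* padded to pointer size on 64-bit builds */
	uint32_t queue;
#if UINTPTR_MAX > UINT32_MAX	/* stand-in for RTE_ARCH_64 */
	uint32_t reserved;
#endif
};

int main(void)
{
	printf("pointer size          : %zu bytes\n", sizeof(void *));
	printf("item before the patch : %zu bytes\n",
	       sizeof(struct item_sq_before));
	printf("item after the patch  : %zu bytes\n",
	       sizeof(struct item_sq_after));
	return 0;
}
```
On a 64-bit build, copying a "before" item as pointer-sized data reads 8 bytes from a 4-byte object, which is the overflow the patch prevents.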
* patch 'net/mlx5: fix non-template flow action validation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (15 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix SQ flow item size' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix SWS meter state initialization' " Xueming Li
` (79 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f782c65a9f7bc4797770b9688bb9ddcac21cf3bf
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f782c65a9f7bc4797770b9688bb9ddcac21cf3bf Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Sun, 27 Oct 2024 16:36:59 +0200
Subject: [PATCH] net/mlx5: fix non-template flow action validation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ee76b173b2e93ab4a0c9b4153191965259dae972 ]
Remove HWS actions template validation for non-template API setup.
The non-template API does not have action masks, and action templates
are created internally by the PMD. The existing validation was
incorrectly checking action parameters in non-existent masks.
This change ensures that the HWS actions template validation is only
performed for the template API, where action masks are available and
the validation is meaningful.
Fixes: 04a4de756e14 ("net/mlx5: support flow age action with HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ca2611942e..5c611b03b9 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6526,8 +6526,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
uint32_t expand_mf_num = 0;
uint16_t src_off[MLX5_HW_MAX_ACTS] = {0, };
- if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
- &action_flags, error))
+ if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks, &action_flags, error))
return NULL;
for (i = 0; ra[i].type != RTE_FLOW_ACTION_TYPE_END; ++i) {
switch (ra[i].type) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-12-06 23:26:44.719138519 +0800
+++ 0017-net-mlx5-fix-non-template-flow-action-validation.patch 2024-12-06 23:26:43.883044829 +0800
@@ -1 +1 @@
-From ee76b173b2e93ab4a0c9b4153191965259dae972 Mon Sep 17 00:00:00 2001
+From f782c65a9f7bc4797770b9688bb9ddcac21cf3bf Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ee76b173b2e93ab4a0c9b4153191965259dae972 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -21,2 +23,2 @@
- drivers/net/mlx5/mlx5_flow_hw.c | 4 ++--
- 1 file changed, 2 insertions(+), 2 deletions(-)
+ drivers/net/mlx5/mlx5_flow_hw.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
@@ -25 +27 @@
-index 6081ebc7e2..94c9ecd165 100644
+index ca2611942e..5c611b03b9 100644
@@ -28,3 +30,3 @@
-@@ -7873,8 +7873,8 @@ __flow_hw_actions_template_create(struct rte_eth_dev *dev,
- uint32_t tmpl_flags = 0;
- int ret;
+@@ -6526,8 +6526,7 @@ flow_hw_actions_template_create(struct rte_eth_dev *dev,
+ uint32_t expand_mf_num = 0;
+ uint16_t src_off[MLX5_HW_MAX_ACTS] = {0, };
@@ -34,2 +36 @@
-+ if (!nt_mode && mlx5_flow_hw_actions_validate(dev, attr, actions, masks,
-+ &action_flags, error))
++ if (mlx5_flow_hw_actions_validate(dev, attr, actions, masks, &action_flags, error))
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix SWS meter state initialization' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (16 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix non-template flow action validation' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix reported Rx/Tx descriptor limits' " Xueming Li
` (78 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=19021b32a7aa90926974e99dc25a0fb9d9d0b071
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 19021b32a7aa90926974e99dc25a0fb9d9d0b071 Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Sun, 27 Oct 2024 17:31:36 +0200
Subject: [PATCH] net/mlx5: fix SWS meter state initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0c37d8f7ba2cac289896de024d9c58a65ba3ece9 ]
Update the state initialization for SWS meter objects to properly
monitor ASO object availability. The PMD uses the meter 'state'
variable to track whether ASO objects are available for use.
This ensures that the SWS meter object state is correctly
initialized, allowing the PMD to accurately manage ASO resources
for metering functionality.
Fixes: 4359d9d1f76b ("net/mlx5: fix sync meter processing in HWS")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_meter.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c
index a4d954cb1f..1376533604 100644
--- a/drivers/net/mlx5/mlx5_flow_meter.c
+++ b/drivers/net/mlx5/mlx5_flow_meter.c
@@ -1613,6 +1613,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
if (sh->meter_aso_en) {
fm->is_enable = !!is_enable;
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
+ aso_mtr->state = ASO_METER_WAIT;
ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE,
aso_mtr, &priv->mtr_bulk,
NULL, true);
@@ -1864,6 +1865,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
/* If ASO meter supported, update ASO flow meter by wqe. */
if (priv->sh->meter_aso_en) {
aso_mtr = container_of(fm, struct mlx5_aso_mtr, fm);
+ aso_mtr->state = ASO_METER_WAIT;
ret = mlx5_aso_meter_update_by_wqe(priv, MLX5_HW_INV_QUEUE,
aso_mtr, &priv->mtr_bulk, NULL, true);
if (ret)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-12-06 23:26:44.757004919 +0800
+++ 0018-net-mlx5-fix-SWS-meter-state-initialization.patch 2024-12-06 23:26:43.883044829 +0800
@@ -1 +1 @@
-From 0c37d8f7ba2cac289896de024d9c58a65ba3ece9 Mon Sep 17 00:00:00 2001
+From 19021b32a7aa90926974e99dc25a0fb9d9d0b071 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0c37d8f7ba2cac289896de024d9c58a65ba3ece9 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 98a61cbdd4..233b140785 100644
+index a4d954cb1f..1376533604 100644
@@ -27 +29 @@
-@@ -1914,6 +1914,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
+@@ -1613,6 +1613,7 @@ mlx5_flow_meter_action_modify(struct mlx5_priv *priv,
@@ -35 +37 @@
-@@ -2165,6 +2166,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
+@@ -1864,6 +1865,7 @@ mlx5_flow_meter_create(struct rte_eth_dev *dev, uint32_t meter_id,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix reported Rx/Tx descriptor limits' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (17 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix SWS meter state initialization' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix indirect list flow action callback invocation' " Xueming Li
` (77 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Igor Gutorov; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=10f46d35b3254f4a43c268b873ab571fde3c6722
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 10f46d35b3254f4a43c268b873ab571fde3c6722 Mon Sep 17 00:00:00 2001
From: Igor Gutorov <igootorov@gmail.com>
Date: Wed, 7 Aug 2024 23:44:05 +0300
Subject: [PATCH] net/mlx5: fix reported Rx/Tx descriptor limits
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4c3d7961d9002bb715a8ee76bcf464d633316d4c ]
Currently, `rte_eth_dev_info.rx_desc_lim.nb_max` as well as
`rte_eth_dev_info.tx_desc_lim.nb_max` shows 65535 as the limit,
which results in a few problems:
* It is not the actual Rx/Tx queue limit
* Allocating an Rx queue and passing `rx_desc_lim.nb_max` results in an
integer overflow and 0 ring size:
```
rte_eth_rx_queue_setup(0, 0, rx_desc_lim.nb_max, 0, NULL, mb_pool);
```
Which overflows ring size and generates the following log:
```
mlx5_net: port 0 increased number of descriptors in Rx queue 0 to the
next power of two (0)
```
The same holds for allocating a Tx queue.
Fixes: e60fbd5b24fc ("mlx5: add device configure/start/stop")
Signed-off-by: Igor Gutorov <igootorov@gmail.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 1 +
drivers/common/mlx5/mlx5_devx_cmds.h | 1 +
drivers/net/mlx5/mlx5_ethdev.c | 4 ++++
drivers/net/mlx5/mlx5_rxq.c | 8 ++++++++
drivers/net/mlx5/mlx5_txq.c | 8 ++++++++
5 files changed, 22 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 630ab96a8f..9e2d7ce86f 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1019,6 +1019,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->log_max_qp = MLX5_GET(cmd_hca_cap, hcattr, log_max_qp);
attr->log_max_cq_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_cq_sz);
attr->log_max_qp_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_qp_sz);
+ attr->log_max_wq_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_wq_sz);
attr->log_max_mrw_sz = MLX5_GET(cmd_hca_cap, hcattr, log_max_mrw_sz);
attr->log_max_pd = MLX5_GET(cmd_hca_cap, hcattr, log_max_pd);
attr->log_max_srq = MLX5_GET(cmd_hca_cap, hcattr, log_max_srq);
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index b814c8becc..028cf2abb9 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -264,6 +264,7 @@ struct mlx5_hca_attr {
struct mlx5_hca_flow_attr flow;
struct mlx5_hca_flex_attr flex;
struct mlx5_hca_crypto_mmo_attr crypto_mmo;
+ uint8_t log_max_wq_sz;
int log_max_qp_sz;
int log_max_cq_sz;
int log_max_qp;
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index ec4bdd8af1..8f29e58cda 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -351,6 +351,10 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
info->flow_type_rss_offloads = ~MLX5_RSS_HF_MASK;
mlx5_set_default_params(dev, info);
mlx5_set_txlimit_params(dev, info);
+ info->rx_desc_lim.nb_max =
+ 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz;
+ info->tx_desc_lim.nb_max =
+ 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz;
if (priv->sh->cdev->config.hca_attr.mem_rq_rmp &&
priv->obj_ops.rxq_obj_new == devx_obj_ops.rxq_obj_new)
info->dev_capa |= RTE_ETH_DEV_CAPA_RXQ_SHARE;
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index e45cca9133..82d8f29b31 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -655,6 +655,14 @@ mlx5_rx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc,
struct mlx5_rxq_priv *rxq;
bool empty;
+ if (*desc > 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz) {
+ DRV_LOG(ERR,
+ "port %u number of descriptors requested for Rx queue"
+ " %u is more than supported",
+ dev->data->port_id, idx);
+ rte_errno = EINVAL;
+ return -EINVAL;
+ }
if (!rte_is_power_of_2(*desc)) {
*desc = 1 << log2above(*desc);
DRV_LOG(WARNING,
diff --git a/drivers/net/mlx5/mlx5_txq.c b/drivers/net/mlx5/mlx5_txq.c
index aac078a6ed..52a39ae073 100644
--- a/drivers/net/mlx5/mlx5_txq.c
+++ b/drivers/net/mlx5/mlx5_txq.c
@@ -332,6 +332,14 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ if (*desc > 1 << priv->sh->cdev->config.hca_attr.log_max_wq_sz) {
+ DRV_LOG(ERR,
+ "port %u number of descriptors requested for Tx queue"
+ " %u is more than supported",
+ dev->data->port_id, idx);
+ rte_errno = EINVAL;
+ return -EINVAL;
+ }
if (*desc <= MLX5_TX_COMP_THRESH) {
DRV_LOG(WARNING,
"port %u number of descriptors requested for Tx queue"
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- - 2024-12-06 23:26:44.788275618 +0800
+++ 0019-net-mlx5-fix-reported-Rx-Tx-descriptor-limits.patch 2024-12-06 23:26:43.893044828 +0800
@@ -1 +1 @@
-From 4c3d7961d9002bb715a8ee76bcf464d633316d4c Mon Sep 17 00:00:00 2001
+From 10f46d35b3254f4a43c268b873ab571fde3c6722 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4c3d7961d9002bb715a8ee76bcf464d633316d4c ]
@@ -26 +28,0 @@
-Cc: stable@dpdk.org
@@ -39 +41 @@
-index 9710dcedd3..a75f011750 100644
+index 630ab96a8f..9e2d7ce86f 100644
@@ -42 +44 @@
-@@ -1027,6 +1027,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
+@@ -1019,6 +1019,7 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
@@ -51 +53 @@
-index 6cf7999c46..2ad9e5414f 100644
+index b814c8becc..028cf2abb9 100644
@@ -54 +56 @@
-@@ -267,6 +267,7 @@ struct mlx5_hca_attr {
+@@ -264,6 +264,7 @@ struct mlx5_hca_attr {
@@ -63 +65 @@
-index 6f24d649e0..7708a0b808 100644
+index ec4bdd8af1..8f29e58cda 100644
@@ -66 +68 @@
-@@ -359,6 +359,10 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
+@@ -351,6 +351,10 @@ mlx5_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
@@ -78 +80 @@
-index c6655b7db4..5eac224b76 100644
+index e45cca9133..82d8f29b31 100644
@@ -97 +99 @@
-index f05534e168..3e93517323 100644
+index aac078a6ed..52a39ae073 100644
@@ -100 +102 @@
-@@ -333,6 +333,14 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
+@@ -332,6 +332,14 @@ mlx5_tx_queue_pre_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t *desc)
^ permalink raw reply [flat|nested] 230+ messages in thread
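Independently of the driver-side fix above, an application can guard against over-sized ring requests. The sketch below is a hedged example, not part of the patch: it assumes an already-initialized port and mempool, and uses the generic rte_eth_dev_adjust_nb_rx_tx_desc() helper to clamp the requested descriptor counts to whatever limits the PMD reports before calling queue setup.
```c
#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_lcore.h>

static int setup_rxq0(uint16_t port, struct rte_mempool *pool)
{
	struct rte_eth_dev_info info;
	uint16_t nb_rxd = 4096, nb_txd = 4096;	/* requested ring sizes */
	int ret;

	ret = rte_eth_dev_info_get(port, &info);
	if (ret != 0)
		return ret;
	printf("port %u: rx_desc_lim.nb_max=%u tx_desc_lim.nb_max=%u\n",
	       port, info.rx_desc_lim.nb_max, info.tx_desc_lim.nb_max);

	/* Clamp to the limits the driver reports instead of passing
	 * nb_max straight into the queue setup call. */
	ret = rte_eth_dev_adjust_nb_rx_tx_desc(port, &nb_rxd, &nb_txd);
	if (ret != 0)
		return ret;

	return rte_eth_rx_queue_setup(port, 0, nb_rxd, rte_socket_id(),
				      NULL, pool);
}
```
With the clamping in place, the requested 4096 descriptors are reduced (or aligned) as needed, so the "next power of two (0)" overflow described in the commit message cannot be triggered from the application side.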
* patch 'net/mlx5: fix indirect list flow action callback invocation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (18 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix reported Rx/Tx descriptor limits' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'app/dumpcap: remove unused struct array' " Xueming Li
` (76 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gregory Etelson; +Cc: Xueming Li, Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=933bc295be4f8cd04dbbbb204965fd999a50ef7b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 933bc295be4f8cd04dbbbb204965fd999a50ef7b Mon Sep 17 00:00:00 2001
From: Gregory Etelson <getelson@nvidia.com>
Date: Mon, 28 Oct 2024 11:43:50 +0200
Subject: [PATCH] net/mlx5: fix indirect list flow action callback invocation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e53e4c39d2514667a7065cb0dd2d8fe3dcd843e3 ]
Fix indirect action list callback parameter.
The function must be called with the flow action to process, not with
the actions list.
Fixes: e26f50adbf38 ("net/mlx5: support indirect list meter mark action")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 5c611b03b9..0eaf38537a 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3033,7 +3033,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
(int)action->type == act_data->type);
switch ((int)act_data->type) {
case RTE_FLOW_ACTION_TYPE_INDIRECT_LIST:
- act_data->indirect_list_cb(dev, act_data, actions,
+ act_data->indirect_list_cb(dev, act_data, action,
&rule_acts[act_data->action_dst]);
break;
case RTE_FLOW_ACTION_TYPE_INDIRECT:
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.819232718 +0800
+++ 0020-net-mlx5-fix-indirect-list-flow-action-callback-invo.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From e53e4c39d2514667a7065cb0dd2d8fe3dcd843e3 Mon Sep 17 00:00:00 2001
+From 933bc295be4f8cd04dbbbb204965fd999a50ef7b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e53e4c39d2514667a7065cb0dd2d8fe3dcd843e3 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 804b58db5c..c13b5dc65c 100644
+index 5c611b03b9..0eaf38537a 100644
@@ -23 +25 @@
-@@ -3552,7 +3552,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
+@@ -3033,7 +3033,7 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'app/dumpcap: remove unused struct array' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (19 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/mlx5: fix indirect list flow action callback invocation' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'bus/fslmc: fix Coverity warnings in QBMAN' " Xueming Li
` (75 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Xueming Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=58645a6a0fd9f6888b8dfbd2c0a552daf9b70011
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 58645a6a0fd9f6888b8dfbd2c0a552daf9b70011 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Tue, 5 Nov 2024 11:27:21 +0000
Subject: [PATCH] app/dumpcap: remove unused struct array
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9bbd44d63846cf0771ec0f1c7e1b5a63ec5e9603 ]
The callbacks (rx_cb) member of struct interface was unused inside
dumpcap, but was taking up a lot of memory space, since it was scaled
according to RTE_MAX_QUEUES_PER_PORT, which is 1k by default. Save
memory by removing the whole array.
Fixes: cbb44143be74 ("app/dumpcap: add new packet capture application")
Suggested-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
app/dumpcap/main.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/app/dumpcap/main.c b/app/dumpcap/main.c
index 213e764c2e..5eaab6350f 100644
--- a/app/dumpcap/main.c
+++ b/app/dumpcap/main.c
@@ -93,7 +93,6 @@ struct interface {
struct rte_bpf_prm *bpf_prm;
char name[RTE_ETH_NAME_MAX_LEN];
- struct rte_rxtx_callback *rx_cb[RTE_MAX_QUEUES_PER_PORT];
const char *ifname;
const char *ifdescr;
};
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.855842618 +0800
+++ 0021-app-dumpcap-remove-unused-struct-array.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From 9bbd44d63846cf0771ec0f1c7e1b5a63ec5e9603 Mon Sep 17 00:00:00 2001
+From 58645a6a0fd9f6888b8dfbd2c0a552daf9b70011 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9bbd44d63846cf0771ec0f1c7e1b5a63ec5e9603 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 4031f48441..3d3c0dbc66 100644
+index 213e764c2e..5eaab6350f 100644
@@ -25 +27 @@
-@@ -96,7 +96,6 @@ struct interface {
+@@ -93,7 +93,6 @@ struct interface {
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'bus/fslmc: fix Coverity warnings in QBMAN' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (20 preceding siblings ...)
2024-12-07 7:59 ` patch 'app/dumpcap: remove unused struct array' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/dpaa2: fix memory corruption in TM' " Xueming Li
` (74 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Rohit Raj; +Cc: Xueming Li, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0f7976a1f975d3cd2215d543feb27cec214f14a8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0f7976a1f975d3cd2215d543feb27cec214f14a8 Mon Sep 17 00:00:00 2001
From: Rohit Raj <rohit.raj@nxp.com>
Date: Wed, 23 Oct 2024 17:29:32 +0530
Subject: [PATCH] bus/fslmc: fix Coverity warnings in QBMAN
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 051f4185f98faa964b6a965b2e8e7b2da68969de ]
Fix issues reported by NXP internal Coverity.
Fixes: 64f131a82fbe ("bus/fslmc: add qbman debug")
Signed-off-by: Rohit Raj <rohit.raj@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/fslmc/qbman/qbman_debug.c | 49 +++++++++++++++++----------
1 file changed, 32 insertions(+), 17 deletions(-)
diff --git a/drivers/bus/fslmc/qbman/qbman_debug.c b/drivers/bus/fslmc/qbman/qbman_debug.c
index eea06988ff..0e471ec3fd 100644
--- a/drivers/bus/fslmc/qbman/qbman_debug.c
+++ b/drivers/bus/fslmc/qbman/qbman_debug.c
@@ -1,6 +1,6 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2015 Freescale Semiconductor, Inc.
- * Copyright 2018-2020 NXP
+ * Copyright 2018-2020,2022 NXP
*/
#include "compat.h"
@@ -37,6 +37,7 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
struct qbman_bp_query_rslt *r)
{
struct qbman_bp_query_desc *p;
+ struct qbman_bp_query_rslt *bp_query_rslt;
/* Start the management command */
p = (struct qbman_bp_query_desc *)qbman_swp_mc_start(s);
@@ -47,14 +48,16 @@ int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
p->bpid = bpid;
/* Complete the management command */
- *r = *(struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s, p,
- QBMAN_BP_QUERY);
- if (!r) {
+ bp_query_rslt = (struct qbman_bp_query_rslt *)qbman_swp_mc_complete(s,
+ p, QBMAN_BP_QUERY);
+ if (!bp_query_rslt) {
pr_err("qbman: Query BPID %d failed, no response\n",
bpid);
return -EIO;
}
+ *r = *bp_query_rslt;
+
/* Decode the outcome */
QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_BP_QUERY);
@@ -202,20 +205,23 @@ int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
struct qbman_fq_query_rslt *r)
{
struct qbman_fq_query_desc *p;
+ struct qbman_fq_query_rslt *fq_query_rslt;
p = (struct qbman_fq_query_desc *)qbman_swp_mc_start(s);
if (!p)
return -EBUSY;
p->fqid = fqid;
- *r = *(struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s, p,
- QBMAN_FQ_QUERY);
- if (!r) {
+ fq_query_rslt = (struct qbman_fq_query_rslt *)qbman_swp_mc_complete(s,
+ p, QBMAN_FQ_QUERY);
+ if (!fq_query_rslt) {
pr_err("qbman: Query FQID %d failed, no response\n",
fqid);
return -EIO;
}
+ *r = *fq_query_rslt;
+
/* Decode the outcome */
QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_FQ_QUERY);
@@ -398,20 +404,23 @@ int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
struct qbman_cgr_query_rslt *r)
{
struct qbman_cgr_query_desc *p;
+ struct qbman_cgr_query_rslt *cgr_query_rslt;
p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
if (!p)
return -EBUSY;
p->cgid = cgid;
- *r = *(struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s, p,
- QBMAN_CGR_QUERY);
- if (!r) {
+ cgr_query_rslt = (struct qbman_cgr_query_rslt *)qbman_swp_mc_complete(s,
+ p, QBMAN_CGR_QUERY);
+ if (!cgr_query_rslt) {
pr_err("qbman: Query CGID %d failed, no response\n",
cgid);
return -EIO;
}
+ *r = *cgr_query_rslt;
+
/* Decode the outcome */
QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_CGR_QUERY);
@@ -473,20 +482,23 @@ int qbman_cgr_wred_query(struct qbman_swp *s, uint32_t cgid,
struct qbman_wred_query_rslt *r)
{
struct qbman_cgr_query_desc *p;
+ struct qbman_wred_query_rslt *wred_query_rslt;
p = (struct qbman_cgr_query_desc *)qbman_swp_mc_start(s);
if (!p)
return -EBUSY;
p->cgid = cgid;
- *r = *(struct qbman_wred_query_rslt *)qbman_swp_mc_complete(s, p,
- QBMAN_WRED_QUERY);
- if (!r) {
+ wred_query_rslt = (struct qbman_wred_query_rslt *)qbman_swp_mc_complete(
+ s, p, QBMAN_WRED_QUERY);
+ if (!wred_query_rslt) {
pr_err("qbman: Query CGID WRED %d failed, no response\n",
cgid);
return -EIO;
}
+ *r = *wred_query_rslt;
+
/* Decode the outcome */
QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WRED_QUERY);
@@ -527,7 +539,7 @@ void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
if (mn == 0)
*maxth = ma;
else
- *maxth = ((ma+256) * (1<<(mn-1)));
+ *maxth = ((uint64_t)(ma+256) * (1<<(mn-1)));
if (step_s == 0)
*minth = *maxth - step_i;
@@ -630,6 +642,7 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
struct qbman_wqchan_query_rslt *r)
{
struct qbman_wqchan_query_desc *p;
+ struct qbman_wqchan_query_rslt *wqchan_query_rslt;
/* Start the management command */
p = (struct qbman_wqchan_query_desc *)qbman_swp_mc_start(s);
@@ -640,14 +653,16 @@ int qbman_wqchan_query(struct qbman_swp *s, uint16_t chanid,
p->chid = chanid;
/* Complete the management command */
- *r = *(struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(s, p,
- QBMAN_WQ_QUERY);
- if (!r) {
+ wqchan_query_rslt = (struct qbman_wqchan_query_rslt *)qbman_swp_mc_complete(
+ s, p, QBMAN_WQ_QUERY);
+ if (!wqchan_query_rslt) {
pr_err("qbman: Query WQ Channel %d failed, no response\n",
chanid);
return -EIO;
}
+ *r = *wqchan_query_rslt;
+
/* Decode the outcome */
QBMAN_BUG_ON((r->verb & QBMAN_RESPONSE_VERB_MASK) != QBMAN_WQ_QUERY);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.882405617 +0800
+++ 0022-bus-fslmc-fix-Coverity-warnings-in-QBMAN.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From 051f4185f98faa964b6a965b2e8e7b2da68969de Mon Sep 17 00:00:00 2001
+From 0f7976a1f975d3cd2215d543feb27cec214f14a8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 051f4185f98faa964b6a965b2e8e7b2da68969de ]
@@ -9 +11,0 @@
-Cc: stable@dpdk.org
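For readers skimming the diff: the old code copied through the return of qbman_swp_mc_complete() before checking it, and the subsequent "if (!r)" tested the caller-supplied output pointer, which is never NULL, so a timed-out command was never caught. A minimal standalone sketch of the corrected pattern, with hypothetical names standing in for the QBMAN calls:
#include <errno.h>
#include <stdio.h>

struct rslt { unsigned int verb; };

/* hypothetical stand-in for qbman_swp_mc_complete(); returns NULL on timeout */
static struct rslt *mc_complete(void)
{
	static struct rslt resp = { .verb = 0x32 };
	return &resp;
}

static int bp_query(struct rslt *r)
{
	struct rslt *resp = mc_complete();

	if (resp == NULL) {		/* check the response pointer first... */
		fprintf(stderr, "query failed, no response\n");
		return -EIO;
	}
	*r = *resp;			/* ...and only then copy it out */
	return 0;
}

int main(void)
{
	struct rslt r;
	return bp_query(&r);
}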
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/dpaa2: fix memory corruption in TM' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (21 preceding siblings ...)
2024-12-07 7:59 ` patch 'bus/fslmc: fix Coverity warnings in QBMAN' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'examples/l3fwd-power: fix options parsing overflow' " Xueming Li
` (73 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Gagandeep Singh; +Cc: Xueming Li, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2069d39f4189a95a38b119191ce0a48c83f1a1d4
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2069d39f4189a95a38b119191ce0a48c83f1a1d4 Mon Sep 17 00:00:00 2001
From: Gagandeep Singh <g.singh@nxp.com>
Date: Wed, 23 Oct 2024 17:29:47 +0530
Subject: [PATCH] net/dpaa2: fix memory corruption in TM
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d77cb0c44cf6b68dc71684bd302fd3138b36e5f1 ]
The driver was reserving memory in an array for 8 queues only,
but it can support configurations with many more queues.
This patch fixes the memory corruption issue by defining the
queue array with the correct size.
Fixes: 72100f0dee21 ("net/dpaa2: support level 2 in traffic management")
Signed-off-by: Gagandeep Singh <g.singh@nxp.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/net/dpaa2/dpaa2_tm.c | 29 ++++++++++++++++++-----------
1 file changed, 18 insertions(+), 11 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_tm.c b/drivers/net/dpaa2/dpaa2_tm.c
index 3c0f282ec3..c4efdf0af8 100644
--- a/drivers/net/dpaa2/dpaa2_tm.c
+++ b/drivers/net/dpaa2/dpaa2_tm.c
@@ -684,6 +684,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
struct dpaa2_tm_node *leaf_node, *temp_leaf_node, *channel_node;
struct fsl_mc_io *dpni = (struct fsl_mc_io *)dev->process_private;
int ret, t;
+ bool conf_schedule = false;
/* Populate TCs */
LIST_FOREACH(channel_node, &priv->nodes, next) {
@@ -757,7 +758,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
}
LIST_FOREACH(channel_node, &priv->nodes, next) {
- int wfq_grp = 0, is_wfq_grp = 0, conf[DPNI_MAX_TC];
+ int wfq_grp = 0, is_wfq_grp = 0, conf[priv->nb_tx_queues];
struct dpni_tx_priorities_cfg prio_cfg;
memset(&prio_cfg, 0, sizeof(prio_cfg));
@@ -767,6 +768,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
if (channel_node->level_id != CHANNEL_LEVEL)
continue;
+ conf_schedule = false;
LIST_FOREACH(leaf_node, &priv->nodes, next) {
struct dpaa2_queue *leaf_dpaa2_q;
uint8_t leaf_tc_id;
@@ -789,6 +791,7 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
if (leaf_node->parent != channel_node)
continue;
+ conf_schedule = true;
leaf_dpaa2_q = (struct dpaa2_queue *)dev->data->tx_queues[leaf_node->id];
leaf_tc_id = leaf_dpaa2_q->tc_index;
/* Process sibling leaf nodes */
@@ -829,8 +832,8 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
goto out;
}
is_wfq_grp = 1;
- conf[temp_leaf_node->id] = 1;
}
+ conf[temp_leaf_node->id] = 1;
}
if (is_wfq_grp) {
if (wfq_grp == 0) {
@@ -851,6 +854,9 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
}
conf[leaf_node->id] = 1;
}
+ if (!conf_schedule)
+ continue;
+
if (wfq_grp > 1) {
prio_cfg.separate_groups = 1;
if (prio_cfg.prio_group_B < prio_cfg.prio_group_A) {
@@ -864,6 +870,16 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
prio_cfg.prio_group_A = 1;
prio_cfg.channel_idx = channel_node->channel_id;
+ DPAA2_PMD_DEBUG("########################################");
+ DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
+ for (t = 0; t < DPNI_MAX_TC; t++)
+ DPAA2_PMD_DEBUG("tc = %d mode = %d, delta = %d", t,
+ prio_cfg.tc_sched[t].mode,
+ prio_cfg.tc_sched[t].delta_bandwidth);
+
+ DPAA2_PMD_DEBUG("prioritya = %d, priorityb = %d, separate grps"
+ " = %d", prio_cfg.prio_group_A,
+ prio_cfg.prio_group_B, prio_cfg.separate_groups);
ret = dpni_set_tx_priorities(dpni, 0, priv->token, &prio_cfg);
if (ret) {
ret = -rte_tm_error_set(error, EINVAL,
@@ -871,15 +887,6 @@ dpaa2_hierarchy_commit(struct rte_eth_dev *dev, int clear_on_fail,
"Scheduling Failed\n");
goto out;
}
- DPAA2_PMD_DEBUG("########################################");
- DPAA2_PMD_DEBUG("Channel idx = %d", prio_cfg.channel_idx);
- for (t = 0; t < DPNI_MAX_TC; t++) {
- DPAA2_PMD_DEBUG("tc = %d mode = %d ", t, prio_cfg.tc_sched[t].mode);
- DPAA2_PMD_DEBUG("delta = %d", prio_cfg.tc_sched[t].delta_bandwidth);
- }
- DPAA2_PMD_DEBUG("prioritya = %d", prio_cfg.prio_group_A);
- DPAA2_PMD_DEBUG("priorityb = %d", prio_cfg.prio_group_B);
- DPAA2_PMD_DEBUG("separate grps = %d", prio_cfg.separate_groups);
}
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.912192917 +0800
+++ 0023-net-dpaa2-fix-memory-corruption-in-TM.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From d77cb0c44cf6b68dc71684bd302fd3138b36e5f1 Mon Sep 17 00:00:00 2001
+From 2069d39f4189a95a38b119191ce0a48c83f1a1d4 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d77cb0c44cf6b68dc71684bd302fd3138b36e5f1 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index fb8c384ca4..ab3e355853 100644
+index 3c0f282ec3..c4efdf0af8 100644
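The essence of the corruption, sketched with a hypothetical helper rather than driver code: conf[] is indexed by node/queue id, so sizing it by the fixed 8-entry DPNI_MAX_TC instead of the configured Tx queue count let higher ids write past the end of the stack array; the fix sizes it by priv->nb_tx_queues.
#include <string.h>

#define DPNI_MAX_TC 8	/* the old, fixed array size */

/* hypothetical helper mirroring the conf[] bookkeeping in the driver */
void mark_configured(unsigned int nb_tx_queues)
{
	int conf[nb_tx_queues];		/* after the fix: sized by the queue count */

	memset(conf, 0, sizeof(conf));
	for (unsigned int id = 0; id < nb_tx_queues; id++)
		conf[id] = 1;		/* with conf[DPNI_MAX_TC], id >= 8 corrupted the stack */
}

int main(void)
{
	mark_configured(16);	/* e.g. 16 Tx queues, more than DPNI_MAX_TC */
	return 0;
}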
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'examples/l3fwd-power: fix options parsing overflow' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (22 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/dpaa2: fix memory corruption in TM' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'examples/l3fwd: fix read beyond boundaries' " Xueming Li
` (72 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Huisong Li
Cc: Xueming Li, Konstantin Ananyev, Chengwen Feng,
Sivaprasad Tummala, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d24b14e13169592f838b16a8ec12d6c37e1c537e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d24b14e13169592f838b16a8ec12d6c37e1c537e Mon Sep 17 00:00:00 2001
From: Huisong Li <lihuisong@huawei.com>
Date: Mon, 11 Nov 2024 10:25:49 +0800
Subject: [PATCH] examples/l3fwd-power: fix options parsing overflow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0bc4795d5994459a3d261afd7f843eb0cabdecf5 ]
Many variables are 'uint32_t', like 'pause_duration', 'scale_freq_min'
and so on. They use parse_int() to parse them from the command line,
but an overflow problem occurs when this function returns.
Fixes: 59f2853c4cae ("examples/l3fwd_power: add configuration options")
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
examples/l3fwd-power/main.c | 41 +++++++++++++++++++------------------
1 file changed, 21 insertions(+), 20 deletions(-)
diff --git a/examples/l3fwd-power/main.c b/examples/l3fwd-power/main.c
index 996ac6dc56..7640b5a9a3 100644
--- a/examples/l3fwd-power/main.c
+++ b/examples/l3fwd-power/main.c
@@ -1523,8 +1523,12 @@ print_usage(const char *prgname)
prgname);
}
+/*
+ * Caller must give the right upper limit so as to ensure receiver variable
+ * doesn't overflow.
+ */
static int
-parse_int(const char *opt)
+parse_uint(const char *opt, uint32_t max, uint32_t *res)
{
char *end = NULL;
unsigned long val;
@@ -1534,23 +1538,15 @@ parse_int(const char *opt)
if ((opt[0] == '\0') || (end == NULL) || (*end != '\0'))
return -1;
- return val;
-}
-
-static int parse_max_pkt_len(const char *pktlen)
-{
- char *end = NULL;
- unsigned long len;
-
- /* parse decimal string */
- len = strtoul(pktlen, &end, 10);
- if ((pktlen[0] == '\0') || (end == NULL) || (*end != '\0'))
+ if (val > max) {
+ RTE_LOG(ERR, L3FWD_POWER, "%s parameter shouldn't exceed %u.\n",
+ opt, max);
return -1;
+ }
- if (len == 0)
- return -1;
+ *res = val;
- return len;
+ return 0;
}
static int
@@ -1897,8 +1893,9 @@ parse_args(int argc, char **argv)
if (!strncmp(lgopts[option_index].name,
CMD_LINE_OPT_MAX_PKT_LEN,
sizeof(CMD_LINE_OPT_MAX_PKT_LEN))) {
+ if (parse_uint(optarg, UINT32_MAX, &max_pkt_len) != 0)
+ return -1;
printf("Custom frame size is configured\n");
- max_pkt_len = parse_max_pkt_len(optarg);
}
if (!strncmp(lgopts[option_index].name,
@@ -1911,29 +1908,33 @@ parse_args(int argc, char **argv)
if (!strncmp(lgopts[option_index].name,
CMD_LINE_OPT_MAX_EMPTY_POLLS,
sizeof(CMD_LINE_OPT_MAX_EMPTY_POLLS))) {
+ if (parse_uint(optarg, UINT32_MAX, &max_empty_polls) != 0)
+ return -1;
printf("Maximum empty polls configured\n");
- max_empty_polls = parse_int(optarg);
}
if (!strncmp(lgopts[option_index].name,
CMD_LINE_OPT_PAUSE_DURATION,
sizeof(CMD_LINE_OPT_PAUSE_DURATION))) {
+ if (parse_uint(optarg, UINT32_MAX, &pause_duration) != 0)
+ return -1;
printf("Pause duration configured\n");
- pause_duration = parse_int(optarg);
}
if (!strncmp(lgopts[option_index].name,
CMD_LINE_OPT_SCALE_FREQ_MIN,
sizeof(CMD_LINE_OPT_SCALE_FREQ_MIN))) {
+ if (parse_uint(optarg, UINT32_MAX, &scale_freq_min) != 0)
+ return -1;
printf("Scaling frequency minimum configured\n");
- scale_freq_min = parse_int(optarg);
}
if (!strncmp(lgopts[option_index].name,
CMD_LINE_OPT_SCALE_FREQ_MAX,
sizeof(CMD_LINE_OPT_SCALE_FREQ_MAX))) {
+ if (parse_uint(optarg, UINT32_MAX, &scale_freq_max) != 0)
+ return -1;
printf("Scaling frequency maximum configured\n");
- scale_freq_max = parse_int(optarg);
}
break;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.938557317 +0800
+++ 0024-examples-l3fwd-power-fix-options-parsing-overflow.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From 0bc4795d5994459a3d261afd7f843eb0cabdecf5 Mon Sep 17 00:00:00 2001
+From d24b14e13169592f838b16a8ec12d6c37e1c537e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0bc4795d5994459a3d261afd7f843eb0cabdecf5 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 272e069207..7bc524aa16 100644
+index 996ac6dc56..7640b5a9a3 100644
@@ -25 +27 @@
-@@ -1520,8 +1520,12 @@ print_usage(const char *prgname)
+@@ -1523,8 +1523,12 @@ print_usage(const char *prgname)
@@ -39 +41 @@
-@@ -1531,23 +1535,15 @@ parse_int(const char *opt)
+@@ -1534,23 +1538,15 @@ parse_int(const char *opt)
@@ -69 +71 @@
-@@ -1894,8 +1890,9 @@ parse_args(int argc, char **argv)
+@@ -1897,8 +1893,9 @@ parse_args(int argc, char **argv)
@@ -80 +82 @@
-@@ -1908,29 +1905,33 @@ parse_args(int argc, char **argv)
+@@ -1911,29 +1908,33 @@ parse_args(int argc, char **argv)
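The parsing pattern introduced here is simple enough to sketch outside the example app (hypothetical main and plain stderr logging instead of RTE_LOG): parse into an unsigned long, reject trailing junk, then reject anything above the caller-supplied maximum before narrowing to uint32_t.
#include <errno.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* parse a decimal option into *res, rejecting values above max */
static int parse_uint(const char *opt, uint32_t max, uint32_t *res)
{
	char *end = NULL;
	unsigned long val;

	errno = 0;
	val = strtoul(opt, &end, 10);
	if (opt[0] == '\0' || end == NULL || *end != '\0' || errno != 0)
		return -1;
	if (val > max) {
		fprintf(stderr, "%s exceeds %" PRIu32 "\n", opt, max);
		return -1;
	}
	*res = (uint32_t)val;
	return 0;
}

int main(void)
{
	uint32_t pause_duration;

	if (parse_uint("4294967296", UINT32_MAX, &pause_duration) != 0)
		puts("rejected: 2^32 would not fit in uint32_t");
	return 0;
}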
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'examples/l3fwd: fix read beyond boundaries' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (23 preceding siblings ...)
2024-12-07 7:59 ` patch 'examples/l3fwd-power: fix options parsing overflow' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'test/bonding: remove redundant info query' " Xueming Li
` (71 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Konstantin Ananyev; +Cc: Xueming Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=66749b0785fc75a28a32fbbcfbf1085f0dabbb53
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 66749b0785fc75a28a32fbbcfbf1085f0dabbb53 Mon Sep 17 00:00:00 2001
From: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Date: Thu, 7 Nov 2024 13:50:51 -0500
Subject: [PATCH] examples/l3fwd: fix read beyond boundaries
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit ebab0e8b2257aa049dd35dedc7efd230b0f45b88 ]
ASAN report:
ERROR: AddressSanitizer: unknown-crash on address 0x7ffffef92e32 at pc 0x00000053d1e9 bp 0x7ffffef92c00 sp 0x7ffffef92bf8
READ of size 16 at 0x7ffffef92e32 thread T0
#0 0x53d1e8 in _mm_loadu_si128 /usr/lib64/gcc/x86_64-suse-linux/11/include/emmintrin.h:703
#1 0x53d1e8 in send_packets_multi ../examples/l3fwd/l3fwd_sse.h:125
#2 0x53d1e8 in acl_send_packets ../examples/l3fwd/l3fwd_acl.c:1048
#3 0x53ec18 in acl_main_loop ../examples/l3fwd/l3fwd_acl.c:1127
#4 0x12151eb in rte_eal_mp_remote_launch ../lib/eal/common/eal_common_launch.c:83
#5 0x5bf2df in main ../examples/l3fwd/main.c:1647
#6 0x7f6d42a0d2bc in __libc_start_main (/lib64/libc.so.6+0x352bc)
#7 0x527499 in _start (/home/kananyev/dpdk-l3fwd-acl/x86_64-native-linuxapp-gcc-dbg-b1/examples/dpdk-l3fwd+0x527499)
The reason is that send_packets_multi() uses 16B loads to access the
input dst_port[] and might read beyond the array boundaries.
Right now it doesn't cause any real issue: the junk values are ignored,
and inside l3fwd the dst_port[] array is always allocated on the stack,
so the memory beyond it is always accessible.
Still, it needs to be fixed.
The patch below simply allocates extra space for dst_port[], so
send_packets_multi() will never read beyond its boundaries.
A better fix would probably be to change send_packets_multi() itself
to avoid accessing entries beyond 'nb_rx'.
Bugzilla ID: 1502
Fixes: 94c54b4158d5 ("examples/l3fwd: rework exact-match")
Signed-off-by: Konstantin Ananyev <konstantin.ananyev@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
.mailmap | 2 +-
examples/l3fwd/l3fwd_altivec.h | 6 +++++-
examples/l3fwd/l3fwd_common.h | 7 +++++++
examples/l3fwd/l3fwd_em_hlm.h | 2 +-
examples/l3fwd/l3fwd_em_sequential.h | 2 +-
examples/l3fwd/l3fwd_fib.c | 2 +-
examples/l3fwd/l3fwd_lpm_altivec.h | 2 +-
examples/l3fwd/l3fwd_lpm_neon.h | 2 +-
examples/l3fwd/l3fwd_lpm_sse.h | 2 +-
examples/l3fwd/l3fwd_neon.h | 6 +++++-
examples/l3fwd/l3fwd_sse.h | 6 +++++-
11 files changed, 29 insertions(+), 10 deletions(-)
diff --git a/.mailmap b/.mailmap
index 7990edf2d0..879d0fd81c 100644
--- a/.mailmap
+++ b/.mailmap
@@ -770,7 +770,7 @@ Kirill Rybalchenko <kirill.rybalchenko@intel.com>
Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Klaus Degner <kd@allegro-packets.com>
Kommula Shiva Shankar <kshankar@marvell.com>
-Konstantin Ananyev <konstantin.v.ananyev@yandex.ru> <konstantin.ananyev@huawei.com> <konstantin.ananyev@intel.com>
+Konstantin Ananyev <konstantin.ananyev@huawei.com> <konstantin.v.ananyev@yandex.ru> <konstantin.ananyev@intel.com>
Krishna Murthy <krishna.j.murthy@intel.com>
Krzysztof Galazka <krzysztof.galazka@intel.com>
Krzysztof Kanas <kkanas@marvell.com> <krzysztof.kanas@caviumnetworks.com>
diff --git a/examples/l3fwd/l3fwd_altivec.h b/examples/l3fwd/l3fwd_altivec.h
index e45e138e59..b91a6b5587 100644
--- a/examples/l3fwd/l3fwd_altivec.h
+++ b/examples/l3fwd/l3fwd_altivec.h
@@ -11,6 +11,9 @@
#include "altivec/port_group.h"
#include "l3fwd_common.h"
+#undef SENDM_PORT_OVERHEAD
+#define SENDM_PORT_OVERHEAD(x) ((x) + 2 * FWDSTEP)
+
/*
* Update source and destination MAC addresses in the ethernet header.
* Perform RFC1812 checks and updates for IPV4 packets.
@@ -117,7 +120,8 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port)
*/
static __rte_always_inline void
send_packets_multi(struct lcore_conf *qconf, struct rte_mbuf **pkts_burst,
- uint16_t dst_port[MAX_PKT_BURST], int nb_rx)
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)],
+ int nb_rx)
{
int32_t k;
int j = 0;
diff --git a/examples/l3fwd/l3fwd_common.h b/examples/l3fwd/l3fwd_common.h
index 224b1c08e8..d94e5f1357 100644
--- a/examples/l3fwd/l3fwd_common.h
+++ b/examples/l3fwd/l3fwd_common.h
@@ -18,6 +18,13 @@
/* Minimum value of IPV4 total length (20B) in network byte order. */
#define IPV4_MIN_LEN_BE (sizeof(struct rte_ipv4_hdr) << 8)
+/*
+ * send_packet_multi() specific number of dest ports
+ * due to implementation we need to allocate array bigger then
+ * actual max number of elements in the array.
+ */
+#define SENDM_PORT_OVERHEAD(x) (x)
+
/*
* From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2:
* - The IP version number must be 4.
diff --git a/examples/l3fwd/l3fwd_em_hlm.h b/examples/l3fwd/l3fwd_em_hlm.h
index 31cda9ddc1..c1d819997a 100644
--- a/examples/l3fwd/l3fwd_em_hlm.h
+++ b/examples/l3fwd/l3fwd_em_hlm.h
@@ -249,7 +249,7 @@ static inline void
l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
struct lcore_conf *qconf)
{
- uint16_t dst_port[MAX_PKT_BURST];
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
l3fwd_em_process_packets(nb_rx, pkts_burst, dst_port, portid, qconf, 0);
send_packets_multi(qconf, pkts_burst, dst_port, nb_rx);
diff --git a/examples/l3fwd/l3fwd_em_sequential.h b/examples/l3fwd/l3fwd_em_sequential.h
index 067f23889a..3a40b2e434 100644
--- a/examples/l3fwd/l3fwd_em_sequential.h
+++ b/examples/l3fwd/l3fwd_em_sequential.h
@@ -79,7 +79,7 @@ l3fwd_em_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
uint16_t portid, struct lcore_conf *qconf)
{
int32_t i, j;
- uint16_t dst_port[MAX_PKT_BURST];
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
if (nb_rx > 0) {
rte_prefetch0(rte_pktmbuf_mtod(pkts_burst[0],
diff --git a/examples/l3fwd/l3fwd_fib.c b/examples/l3fwd/l3fwd_fib.c
index f38b19af3f..a36330119a 100644
--- a/examples/l3fwd/l3fwd_fib.c
+++ b/examples/l3fwd/l3fwd_fib.c
@@ -121,7 +121,7 @@ fib_send_packets(int nb_rx, struct rte_mbuf **pkts_burst,
{
uint32_t ipv4_arr[nb_rx];
uint8_t ipv6_arr[nb_rx][RTE_FIB6_IPV6_ADDR_SIZE];
- uint16_t hops[nb_rx];
+ uint16_t hops[SENDM_PORT_OVERHEAD(nb_rx)];
uint64_t hopsv4[nb_rx], hopsv6[nb_rx];
uint8_t type_arr[nb_rx];
uint32_t ipv4_cnt = 0, ipv6_cnt = 0;
diff --git a/examples/l3fwd/l3fwd_lpm_altivec.h b/examples/l3fwd/l3fwd_lpm_altivec.h
index adb82f1478..91aad5c313 100644
--- a/examples/l3fwd/l3fwd_lpm_altivec.h
+++ b/examples/l3fwd/l3fwd_lpm_altivec.h
@@ -145,7 +145,7 @@ static inline void
l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint8_t portid,
struct lcore_conf *qconf)
{
- uint16_t dst_port[MAX_PKT_BURST];
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
0);
diff --git a/examples/l3fwd/l3fwd_lpm_neon.h b/examples/l3fwd/l3fwd_lpm_neon.h
index 2a68c4c15e..3c1f827424 100644
--- a/examples/l3fwd/l3fwd_lpm_neon.h
+++ b/examples/l3fwd/l3fwd_lpm_neon.h
@@ -171,7 +171,7 @@ static inline void
l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
struct lcore_conf *qconf)
{
- uint16_t dst_port[MAX_PKT_BURST];
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
0);
diff --git a/examples/l3fwd/l3fwd_lpm_sse.h b/examples/l3fwd/l3fwd_lpm_sse.h
index db15030320..50f1abbd8a 100644
--- a/examples/l3fwd/l3fwd_lpm_sse.h
+++ b/examples/l3fwd/l3fwd_lpm_sse.h
@@ -129,7 +129,7 @@ static inline void
l3fwd_lpm_send_packets(int nb_rx, struct rte_mbuf **pkts_burst, uint16_t portid,
struct lcore_conf *qconf)
{
- uint16_t dst_port[MAX_PKT_BURST];
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
l3fwd_lpm_process_packets(nb_rx, pkts_burst, portid, dst_port, qconf,
0);
diff --git a/examples/l3fwd/l3fwd_neon.h b/examples/l3fwd/l3fwd_neon.h
index 40807d5965..bc2bab8265 100644
--- a/examples/l3fwd/l3fwd_neon.h
+++ b/examples/l3fwd/l3fwd_neon.h
@@ -10,6 +10,9 @@
#include "neon/port_group.h"
#include "l3fwd_common.h"
+#undef SENDM_PORT_OVERHEAD
+#define SENDM_PORT_OVERHEAD(x) ((x) + 2 * FWDSTEP)
+
/*
* Update source and destination MAC addresses in the ethernet header.
* Perform RFC1812 checks and updates for IPV4 packets.
@@ -92,7 +95,8 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port)
*/
static __rte_always_inline void
send_packets_multi(struct lcore_conf *qconf, struct rte_mbuf **pkts_burst,
- uint16_t dst_port[MAX_PKT_BURST], int nb_rx)
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)],
+ int nb_rx)
{
int32_t k;
int j = 0;
diff --git a/examples/l3fwd/l3fwd_sse.h b/examples/l3fwd/l3fwd_sse.h
index 083729cdef..6236b7873c 100644
--- a/examples/l3fwd/l3fwd_sse.h
+++ b/examples/l3fwd/l3fwd_sse.h
@@ -10,6 +10,9 @@
#include "sse/port_group.h"
#include "l3fwd_common.h"
+#undef SENDM_PORT_OVERHEAD
+#define SENDM_PORT_OVERHEAD(x) ((x) + 2 * FWDSTEP)
+
/*
* Update source and destination MAC addresses in the ethernet header.
* Perform RFC1812 checks and updates for IPV4 packets.
@@ -91,7 +94,8 @@ process_packet(struct rte_mbuf *pkt, uint16_t *dst_port)
*/
static __rte_always_inline void
send_packets_multi(struct lcore_conf *qconf, struct rte_mbuf **pkts_burst,
- uint16_t dst_port[MAX_PKT_BURST], int nb_rx)
+ uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)],
+ int nb_rx)
{
int32_t k;
int j = 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.967228916 +0800
+++ 0025-examples-l3fwd-fix-read-beyond-boundaries.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From ebab0e8b2257aa049dd35dedc7efd230b0f45b88 Mon Sep 17 00:00:00 2001
+From 66749b0785fc75a28a32fbbcfbf1085f0dabbb53 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit ebab0e8b2257aa049dd35dedc7efd230b0f45b88 ]
@@ -32 +34,0 @@
-Cc: stable@dpdk.org
@@ -38 +39,0 @@
- examples/l3fwd/l3fwd_acl.c | 2 +-
@@ -49 +50 @@
- 12 files changed, 30 insertions(+), 11 deletions(-)
+ 11 files changed, 29 insertions(+), 10 deletions(-)
@@ -52 +53 @@
-index 4894219f2f..4eef08df30 100644
+index 7990edf2d0..879d0fd81c 100644
@@ -55 +56 @@
-@@ -804,7 +804,7 @@ Kirill Rybalchenko <kirill.rybalchenko@intel.com>
+@@ -770,7 +770,7 @@ Kirill Rybalchenko <kirill.rybalchenko@intel.com>
@@ -64,13 +64,0 @@
-diff --git a/examples/l3fwd/l3fwd_acl.c b/examples/l3fwd/l3fwd_acl.c
-index b635011ef7..baa01e6dde 100644
---- a/examples/l3fwd/l3fwd_acl.c
-+++ b/examples/l3fwd/l3fwd_acl.c
-@@ -1056,7 +1056,7 @@ int
- acl_main_loop(__rte_unused void *dummy)
- {
- struct rte_mbuf *pkts_burst[MAX_PKT_BURST];
-- uint16_t hops[MAX_PKT_BURST];
-+ uint16_t hops[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)];
- unsigned int lcore_id;
- uint64_t prev_tsc, diff_tsc, cur_tsc;
- int i, nb_rx;
@@ -146 +134 @@
-index a0eef05a5d..e1eb8c61c8 100644
+index f38b19af3f..a36330119a 100644
@@ -152 +140 @@
- struct rte_ipv6_addr ipv6_arr[nb_rx];
+ uint8_t ipv6_arr[nb_rx][RTE_FIB6_IPV6_ADDR_SIZE];
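To see why the extra space is sufficient (an illustrative x86/SSE sketch, not the l3fwd code itself): dst_port[] is walked with unaligned 128-bit loads, and a load that begins near entry nb_rx covers bytes past it; padding the array by 2 * FWDSTEP uint16_t entries, i.e. one full 16-byte vector, keeps any such trailing load inside the allocation.
#include <emmintrin.h>
#include <stdint.h>

#define FWDSTEP 4
#define MAX_PKT_BURST 32
#define SENDM_PORT_OVERHEAD(x) ((x) + 2 * FWDSTEP)	/* room for one extra 16B load */

void scan_ports(int nb_rx)
{
	/* padded so a 128-bit load starting at any valid index stays in bounds */
	uint16_t dst_port[SENDM_PORT_OVERHEAD(MAX_PKT_BURST)] = {0};

	for (int i = 0; i < nb_rx; i += FWDSTEP) {
		__m128i dp = _mm_loadu_si128((const __m128i *)&dst_port[i]);
		(void)dp;	/* port grouping would compare neighbouring entries here */
	}
}

int main(void)
{
	scan_ports(MAX_PKT_BURST);
	return 0;
}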
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/bonding: remove redundant info query' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (24 preceding siblings ...)
2024-12-07 7:59 ` patch 'examples/l3fwd: fix read beyond boundaries' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'examples/ntb: check info query return' " Xueming Li
` (70 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Morten Brørup, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=82954770f817c99a07f54c2b032cddd6536a1a95
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 82954770f817c99a07f54c2b032cddd6536a1a95 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 4 Oct 2024 09:21:48 -0700
Subject: [PATCH] test/bonding: remove redundant info query
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 419daaa2794ca380db2b1267c62c2d5de516b1b3 ]
The patch to check the return value of rte_eth_dev_info_get
added a duplicate call in one spot.
Fixes: 773392553bed ("app: check status of getting ethdev info")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
app/test/test_link_bonding_rssconf.c | 1 -
1 file changed, 1 deletion(-)
diff --git a/app/test/test_link_bonding_rssconf.c b/app/test/test_link_bonding_rssconf.c
index 3c9c824335..2cb689b1de 100644
--- a/app/test/test_link_bonding_rssconf.c
+++ b/app/test/test_link_bonding_rssconf.c
@@ -616,7 +616,6 @@ test_setup(void)
mac_addr.addr_bytes[5] = 0x10 + port->port_id;
rte_eth_dev_default_mac_addr_set(port->port_id, &mac_addr);
- rte_eth_dev_info_get(port->port_id, &port->dev_info);
retval = rte_eth_dev_info_get(port->port_id, &port->dev_info);
TEST_ASSERT((retval == 0),
"Error during getting device (port %u) info: %s\n",
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:44.998243916 +0800
+++ 0026-test-bonding-remove-redundant-info-query.patch 2024-12-06 23:26:43.903044828 +0800
@@ -1 +1 @@
-From 419daaa2794ca380db2b1267c62c2d5de516b1b3 Mon Sep 17 00:00:00 2001
+From 82954770f817c99a07f54c2b032cddd6536a1a95 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 419daaa2794ca380db2b1267c62c2d5de516b1b3 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'examples/ntb: check info query return' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (25 preceding siblings ...)
2024-12-07 7:59 ` patch 'test/bonding: remove redundant info query' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: force Tx VLAN offload on 801.2Q packet' " Xueming Li
` (69 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Morten Brørup, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0dbc598e787dcfec98d53e4ede9d941dc24646a0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0dbc598e787dcfec98d53e4ede9d941dc24646a0 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 4 Oct 2024 09:21:53 -0700
Subject: [PATCH] examples/ntb: check info query return
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 07e4dc04d99a99699d71a0a39dd2a7034049e663 ]
The ethdev_info is only valid if rte_eth_dev_info_get returns success.
Fixes: 5194299d6ef5 ("examples/ntb: support more functions")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
examples/ntb/ntb_fwd.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/examples/ntb/ntb_fwd.c b/examples/ntb/ntb_fwd.c
index 95a6148c82..fcf0ec9b56 100644
--- a/examples/ntb/ntb_fwd.c
+++ b/examples/ntb/ntb_fwd.c
@@ -1285,7 +1285,10 @@ main(int argc, char **argv)
eth_port_id = rte_eth_find_next(0);
if (eth_port_id < RTE_MAX_ETHPORTS) {
- rte_eth_dev_info_get(eth_port_id, ðdev_info);
+ ret = rte_eth_dev_info_get(eth_port_id, ðdev_info);
+ if (ret)
+ rte_exit(EXIT_FAILURE, "Can't get info for port %u\n", eth_port_id);
+
eth_pconf.rx_adv_conf.rss_conf.rss_hf &=
ethdev_info.flow_type_rss_offloads;
ret = rte_eth_dev_configure(eth_port_id, num_queues,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.023563616 +0800
+++ 0027-examples-ntb-check-info-query-return.patch 2024-12-06 23:26:43.913044828 +0800
@@ -1 +1 @@
-From 07e4dc04d99a99699d71a0a39dd2a7034049e663 Mon Sep 17 00:00:00 2001
+From 0dbc598e787dcfec98d53e4ede9d941dc24646a0 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 07e4dc04d99a99699d71a0a39dd2a7034049e663 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 56c7672392..37d60208e3 100644
+index 95a6148c82..fcf0ec9b56 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/netvsc: force Tx VLAN offload on 801.2Q packet' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (26 preceding siblings ...)
2024-12-07 7:59 ` patch 'examples/ntb: check info query return' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: fix crash after configuration failure' " Xueming Li
` (68 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Long Li; +Cc: Xueming Li, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9458e5b98f1d4c71dc0baeb88c0963033c8ebc69
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9458e5b98f1d4c71dc0baeb88c0963033c8ebc69 Mon Sep 17 00:00:00 2001
From: Long Li <longli@microsoft.com>
Date: Fri, 18 Oct 2024 11:13:50 -0700
Subject: [PATCH] net/netvsc: force Tx VLAN offload on 801.2Q packet
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 06c968f9ba8afeaf03b60871a453652a5828ff3f ]
The VSP assumes the packet doesn't have VLAN tags. When a VLAN tag is
present in a TX packet, always strip it and use a PPI to send the VLAN
info through the VSP packet.
Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Signed-off-by: Long Li <longli@microsoft.com>
Reviewed-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/netvsc/hn_rxtx.c | 20 +++++++++++++++++++-
1 file changed, 19 insertions(+), 1 deletion(-)
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 5777a14d70..eea120ae82 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1569,14 +1569,32 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
struct rte_mbuf *m = tx_pkts[nb_tx];
- uint32_t pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN;
struct rndis_packet_msg *pkt;
struct hn_txdesc *txd;
+ uint32_t pkt_size;
txd = hn_txd_get(txq);
if (txd == NULL)
break;
+ if (!(m->ol_flags & RTE_MBUF_F_TX_VLAN)) {
+ struct rte_ether_hdr *eh =
+ rte_pktmbuf_mtod(m, struct rte_ether_hdr *);
+ struct rte_vlan_hdr *vh;
+
+ /* Force TX vlan offloading for 801.2Q packet */
+ if (eh->ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_VLAN)) {
+ vh = (struct rte_vlan_hdr *)(eh + 1);
+ m->ol_flags |= RTE_MBUF_F_TX_VLAN;
+ m->vlan_tci = rte_be_to_cpu_16(vh->vlan_tci);
+
+ /* Copy ether header over */
+ memmove(rte_pktmbuf_adj(m, sizeof(struct rte_vlan_hdr)),
+ eh, 2 * RTE_ETHER_ADDR_LEN);
+ }
+ }
+ pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN;
+
/* For small packets aggregate them in chimney buffer */
if (m->pkt_len <= hv->tx_copybreak &&
pkt_size <= txq->agg_szmax) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.052428615 +0800
+++ 0028-net-netvsc-force-Tx-VLAN-offload-on-801.2Q-packet.patch 2024-12-06 23:26:43.913044828 +0800
@@ -1 +1 @@
-From 06c968f9ba8afeaf03b60871a453652a5828ff3f Mon Sep 17 00:00:00 2001
+From 9458e5b98f1d4c71dc0baeb88c0963033c8ebc69 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 06c968f9ba8afeaf03b60871a453652a5828ff3f ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 52aedb001f..9d3948e03d 100644
+index 5777a14d70..eea120ae82 100644
@@ -23 +25 @@
-@@ -1557,14 +1557,32 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
+@@ -1569,14 +1569,32 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
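The header rewrite the driver performs can be sketched on a flat buffer (hypothetical helper using plain memmove()/ntohs() instead of the rte_* calls): read the outer ethertype, capture the TCI, then slide the destination and source MACs forward over the 4-byte tag so the frame presents the inner ethertype directly, mirroring the rte_pktmbuf_adj() plus memmove() in the patch.
#include <arpa/inet.h>
#include <stdint.h>
#include <string.h>

#define ETHER_ADDR_LEN 6
#define ETHERTYPE_VLAN 0x8100

/* strip an 802.1Q tag in place; returns the new frame start or NULL if untagged */
static uint8_t *strip_vlan(uint8_t *frame, size_t *len, uint16_t *tci)
{
	uint16_t ethertype;

	memcpy(&ethertype, frame + 2 * ETHER_ADDR_LEN, sizeof(ethertype));
	if (ntohs(ethertype) != ETHERTYPE_VLAN)
		return NULL;

	memcpy(tci, frame + 2 * ETHER_ADDR_LEN + 2, sizeof(*tci));
	*tci = ntohs(*tci);

	/* slide dst+src MACs forward over the 4-byte tag */
	memmove(frame + 4, frame, 2 * ETHER_ADDR_LEN);
	*len -= 4;
	return frame + 4;
}

int main(void)
{
	uint8_t frame[64] = {0};
	size_t len = sizeof(frame);
	uint16_t tci = 0;

	frame[12] = 0x81; frame[13] = 0x00;	/* outer ethertype: 802.1Q */
	frame[14] = 0x00; frame[15] = 0x05;	/* TCI: VLAN 5 */

	return (strip_vlan(frame, &len, &tci) != NULL && tci == 5) ? 0 : 1;
}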
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/vmxnet3: fix crash after configuration failure' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (27 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/netvsc: force Tx VLAN offload on 801.2Q packet' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: remove ROH devices' " Xueming Li
` (67 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Roger Melton; +Cc: Xueming Li, Morten Brørup, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=349d424fd49ca375631073d9da4d08e892a70e49
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 349d424fd49ca375631073d9da4d08e892a70e49 Mon Sep 17 00:00:00 2001
From: Roger Melton <rmelton@cisco.com>
Date: Sat, 26 Oct 2024 10:33:36 -0400
Subject: [PATCH] net/vmxnet3: fix crash after configuration failure
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 439847c154ccf05e1a8bbb955c552921514d31e2 ]
Problem:
If vmxnet3_dev_configure() fails, applications may call
vmxnet3_dev_close(). If the failure occurs before the vmxnet3
hw->shared structure is allocated, the close will lead to a segv.
Root Cause:
This crash is due to incorrect adapter_stopped state in the
vmxnet3 dev_private structure. When dev_private is allocated,
adapter_stopped will be 0 (FALSE). eth_vmxnet3_dev_init() does not
set it to TRUE, so it will remain FALSE until a successful
vmxnet3_dev_start() followed by a vmxnet3_dev_stop(). When
vmxnet3_dev_close() is called, it will invoke vmxnet3_dev_stop().
vmxnet3_dev_stop() will check the adapter_stopped state in the
vmxnet3 shared data, find it is FALSE and will proceed to stop the
device, calling vmxnet3_disable_all_intrs().
vmxnet3_disable_all_intrs() attempts to access the vmxnet3 shared data
resulting in the segv.
Solution:
Set adapter_stopped to TRUE in eth_vmxnet3_dev_init(), to prevent stop
processing.
Fixes: dfaff37fc46d ("vmxnet3: import new vmxnet3 poll mode driver implementation")
Signed-off-by: Roger Melton <rmelton@cisco.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index f98cdb6d58..082ae17465 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -402,6 +402,7 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
/* Vendor and Device ID need to be set before init of shared code */
hw->device_id = pci_dev->id.device_id;
hw->vendor_id = pci_dev->id.vendor_id;
+ hw->adapter_stopped = TRUE;
hw->hw_addr0 = (void *)pci_dev->mem_resource[0].addr;
hw->hw_addr1 = (void *)pci_dev->mem_resource[1].addr;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.084068315 +0800
+++ 0029-net-vmxnet3-fix-crash-after-configuration-failure.patch 2024-12-06 23:26:43.913044828 +0800
@@ -1 +1 @@
-From 439847c154ccf05e1a8bbb955c552921514d31e2 Mon Sep 17 00:00:00 2001
+From 349d424fd49ca375631073d9da4d08e892a70e49 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 439847c154ccf05e1a8bbb955c552921514d31e2 ]
@@ -35 +37,0 @@
-Cc: stable@dpdk.org
@@ -44 +46 @@
-index 78fac63ab6..79ab167421 100644
+index f98cdb6d58..082ae17465 100644
@@ -47 +49 @@
-@@ -403,6 +403,7 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
+@@ -402,6 +402,7 @@ eth_vmxnet3_dev_init(struct rte_eth_dev *eth_dev)
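The fix is a lifecycle-flag ordering issue that can be illustrated with a tiny model (hypothetical names, nothing from the driver): close() always calls stop(), and stop() only skips its shared-memory accesses when the device already reads as stopped, so the flag has to start out true at init time.
#include <stdbool.h>
#include <stdio.h>

struct dev {
	bool adapter_stopped;
	bool shared_allocated;		/* stands in for hw->shared */
};

static void dev_stop(struct dev *d)
{
	if (d->adapter_stopped)
		return;			/* nothing to undo */
	/* touching the shared area here would segfault if init never allocated it */
	printf("disable interrupts via shared area\n");
	d->adapter_stopped = true;
}

static void dev_init(struct dev *d)
{
	d->adapter_stopped = true;	/* the fix: start life as "stopped" */
	d->shared_allocated = false;
}

int main(void)
{
	struct dev d;

	dev_init(&d);
	dev_stop(&d);			/* e.g. close() after a failed configure: no-op */
	return 0;
}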
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: remove ROH devices' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (28 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/vmxnet3: fix crash after configuration failure' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix SWFW mbox' " Xueming Li
` (66 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Dengdui Huang; +Cc: Xueming Li, Jie Hai, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8e03d87d333c1374e020c63838e527a437ea9a13
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8e03d87d333c1374e020c63838e527a437ea9a13 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Sat, 26 Oct 2024 14:38:35 +0800
Subject: [PATCH] net/hns3: remove ROH devices
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit feb4548ffd80bf249239d99bf9053ecf78f815d1 ]
The devices added in commit 3f1436d7006c ("net/hns3: support new device")
are no longer available, so revert it.
Fixes: 3f1436d7006c ("net/hns3: support new device")
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Jie Hai <haijie1@huawei.com>
---
drivers/net/hns3/hns3_cmd.c | 4 +---
drivers/net/hns3/hns3_ethdev.c | 2 --
drivers/net/hns3/hns3_ethdev.h | 2 --
3 files changed, 1 insertion(+), 7 deletions(-)
diff --git a/drivers/net/hns3/hns3_cmd.c b/drivers/net/hns3/hns3_cmd.c
index 001ff49b36..2c1664485b 100644
--- a/drivers/net/hns3/hns3_cmd.c
+++ b/drivers/net/hns3/hns3_cmd.c
@@ -545,9 +545,7 @@ hns3_set_dcb_capability(struct hns3_hw *hw)
if (device_id == HNS3_DEV_ID_25GE_RDMA ||
device_id == HNS3_DEV_ID_50GE_RDMA ||
device_id == HNS3_DEV_ID_100G_RDMA_MACSEC ||
- device_id == HNS3_DEV_ID_200G_RDMA ||
- device_id == HNS3_DEV_ID_100G_ROH ||
- device_id == HNS3_DEV_ID_200G_ROH)
+ device_id == HNS3_DEV_ID_200G_RDMA)
hns3_set_bit(hw->capability, HNS3_DEV_SUPPORT_DCB_B, 1);
}
diff --git a/drivers/net/hns3/hns3_ethdev.c b/drivers/net/hns3/hns3_ethdev.c
index 56adde3c66..dde27715c0 100644
--- a/drivers/net/hns3/hns3_ethdev.c
+++ b/drivers/net/hns3/hns3_ethdev.c
@@ -6650,8 +6650,6 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_50GE_RDMA) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_RDMA_MACSEC) },
{ RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_200G_RDMA) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_100G_ROH) },
- { RTE_PCI_DEVICE(PCI_VENDOR_ID_HUAWEI, HNS3_DEV_ID_200G_ROH) },
{ .vendor_id = 0, }, /* sentinel */
};
diff --git a/drivers/net/hns3/hns3_ethdev.h b/drivers/net/hns3/hns3_ethdev.h
index c190d5109b..00d226d71c 100644
--- a/drivers/net/hns3/hns3_ethdev.h
+++ b/drivers/net/hns3/hns3_ethdev.h
@@ -28,9 +28,7 @@
#define HNS3_DEV_ID_25GE_RDMA 0xA222
#define HNS3_DEV_ID_50GE_RDMA 0xA224
#define HNS3_DEV_ID_100G_RDMA_MACSEC 0xA226
-#define HNS3_DEV_ID_100G_ROH 0xA227
#define HNS3_DEV_ID_200G_RDMA 0xA228
-#define HNS3_DEV_ID_200G_ROH 0xA22C
#define HNS3_DEV_ID_100G_VF 0xA22E
#define HNS3_DEV_ID_100G_RDMA_PFC_VF 0xA22F
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.115847715 +0800
+++ 0030-net-hns3-remove-ROH-devices.patch 2024-12-06 23:26:43.913044828 +0800
@@ -1 +1 @@
-From feb4548ffd80bf249239d99bf9053ecf78f815d1 Mon Sep 17 00:00:00 2001
+From 8e03d87d333c1374e020c63838e527a437ea9a13 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit feb4548ffd80bf249239d99bf9053ecf78f815d1 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 3c5fdbef8f..146444e2fa 100644
+index 001ff49b36..2c1664485b 100644
@@ -36 +38 @@
-index 365b852969..d748eca0b5 100644
+index 56adde3c66..dde27715c0 100644
@@ -39 +41 @@
-@@ -6651,8 +6651,6 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
+@@ -6650,8 +6650,6 @@ static const struct rte_pci_id pci_id_hns3_map[] = {
@@ -49 +51 @@
-index 799b61038a..7824503bb8 100644
+index c190d5109b..00d226d71c 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/txgbe: fix SWFW mbox' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (29 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/hns3: remove ROH devices' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix VF-PF mbox interrupt' " Xueming Li
` (65 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fae1711aba35b0cb6f3a724a2e1446118322fc0a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fae1711aba35b0cb6f3a724a2e1446118322fc0a Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:29:55 +0800
Subject: [PATCH] net/txgbe: fix SWFW mbox
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e389504ed46d84c6a5a6a32b09d6750a182f8725 ]
There is an unknown bug where the register TXGBE_MNGMBX cannot be
written in the loop when DPDK is built with a high GCC version.
Accessing any register before writing TXGBE_MNGMBX can fix it.
Bugzilla ID: 1531
Fixes: 35c90ecccfd4 ("net/txgbe: add EEPROM functions")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/base/txgbe_mng.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/txgbe/base/txgbe_mng.c b/drivers/net/txgbe/base/txgbe_mng.c
index 029a0a1fe1..9770c88bc8 100644
--- a/drivers/net/txgbe/base/txgbe_mng.c
+++ b/drivers/net/txgbe/base/txgbe_mng.c
@@ -58,6 +58,7 @@ txgbe_hic_unlocked(struct txgbe_hw *hw, u32 *buffer, u32 length, u32 timeout)
dword_len = length >> 2;
+ txgbe_flush(hw);
/* The device driver writes the relevant command block
* into the ram area.
*/
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.152355814 +0800
+++ 0031-net-txgbe-fix-SWFW-mbox.patch 2024-12-06 23:26:43.913044828 +0800
@@ -1 +1 @@
-From e389504ed46d84c6a5a6a32b09d6750a182f8725 Mon Sep 17 00:00:00 2001
+From fae1711aba35b0cb6f3a724a2e1446118322fc0a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e389504ed46d84c6a5a6a32b09d6750a182f8725 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 20db982891..7dc8f21183 100644
+index 029a0a1fe1..9770c88bc8 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/txgbe: fix VF-PF mbox interrupt' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (30 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/txgbe: fix SWFW mbox' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: remove outer UDP checksum capability' " Xueming Li
` (64 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=21f80eccc9f51b21660e319c65542347e4f7b2c1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 21f80eccc9f51b21660e319c65542347e4f7b2c1 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:29:56 +0800
Subject: [PATCH] net/txgbe: fix VF-PF mbox interrupt
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5a4ce69701fc01f23a2769c8afff055d87eff864 ]
There was an incorrect bit in the definition of TXGBE_ICRMISC_VFMBX
that prevented the interrupt from being handled correctly.
Fixes: a6712cd029a4 ("net/txgbe: add PF module init and uninit for SRIOV")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/base/txgbe_regs.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/txgbe/base/txgbe_regs.h b/drivers/net/txgbe/base/txgbe_regs.h
index a2984f1106..db02b1b81b 100644
--- a/drivers/net/txgbe/base/txgbe_regs.h
+++ b/drivers/net/txgbe/base/txgbe_regs.h
@@ -1197,7 +1197,7 @@ enum txgbe_5tuple_protocol {
#define TXGBE_ICRMISC_ANDONE MS(19, 0x1) /* link auto-nego done */
#define TXGBE_ICRMISC_ERRIG MS(20, 0x1) /* integrity error */
#define TXGBE_ICRMISC_SPI MS(21, 0x1) /* SPI interface */
-#define TXGBE_ICRMISC_VFMBX MS(22, 0x1) /* VF-PF message box */
+#define TXGBE_ICRMISC_VFMBX MS(23, 0x1) /* VF-PF message box */
#define TXGBE_ICRMISC_GPIO MS(26, 0x1) /* GPIO interrupt */
#define TXGBE_ICRMISC_ERRPCI MS(27, 0x1) /* pcie request error */
#define TXGBE_ICRMISC_HEAT MS(28, 0x1) /* overheat detection */
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.185903214 +0800
+++ 0032-net-txgbe-fix-VF-PF-mbox-interrupt.patch 2024-12-06 23:26:43.923044828 +0800
@@ -1 +1 @@
-From 5a4ce69701fc01f23a2769c8afff055d87eff864 Mon Sep 17 00:00:00 2001
+From 21f80eccc9f51b21660e319c65542347e4f7b2c1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5a4ce69701fc01f23a2769c8afff055d87eff864 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 4ea4a2e3d8..b46d65331e 100644
+index a2984f1106..db02b1b81b 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/txgbe: remove outer UDP checksum capability' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (31 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/txgbe: fix VF-PF mbox interrupt' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix driver load bit to inform firmware' " Xueming Li
` (63 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3bf0166401d85446793ce43da27d874375f1e2e1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3bf0166401d85446793ce43da27d874375f1e2e1 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:29:57 +0800
Subject: [PATCH] net/txgbe: remove outer UDP checksum capability
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 25fe1c780d39ea3637ba8407f6e9a9800135becd ]
The hardware does not support outer UDP checksum for tunnel packets.
It's wrong to claim this Tx offload capability, so fix it.
Bugzilla ID: 1529
Fixes: b950203be7f1 ("net/txgbe: support VXLAN-GPE")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_rxtx.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/txgbe/txgbe_rxtx.c b/drivers/net/txgbe/txgbe_rxtx.c
index 2efc2bcf29..dcea5c23e2 100644
--- a/drivers/net/txgbe/txgbe_rxtx.c
+++ b/drivers/net/txgbe/txgbe_rxtx.c
@@ -2254,8 +2254,7 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MACSEC_INSERT;
- tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM |
- RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM;
+ tx_offload_capa |= RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM;
#ifdef RTE_LIB_SECURITY
if (dev->security_ctx)
--
2.34.1
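As a side note for application writers, offload capabilities should be read
from the device rather than assumed. A minimal sketch of that check using the
standard ethdev calls; the helper name is ours, not DPDK's.

    #include <stdbool.h>
    #include <rte_ethdev.h>

    static bool
    port_supports_outer_udp_cksum(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return false;

        /* After this fix, txgbe no longer advertises the flag below, so
         * an application would fall back to software checksums.
         */
        return (dev_info.tx_offload_capa &
                RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM) != 0;
    }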
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.218575814 +0800
+++ 0033-net-txgbe-remove-outer-UDP-checksum-capability.patch 2024-12-06 23:26:43.923044828 +0800
@@ -1 +1 @@
-From 25fe1c780d39ea3637ba8407f6e9a9800135becd Mon Sep 17 00:00:00 2001
+From 3bf0166401d85446793ce43da27d874375f1e2e1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 25fe1c780d39ea3637ba8407f6e9a9800135becd ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 5bc0f8772f..c12726553c 100644
+index 2efc2bcf29..dcea5c23e2 100644
@@ -22 +24 @@
-@@ -2284,8 +2284,7 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
+@@ -2254,8 +2254,7 @@ txgbe_get_tx_port_offloads(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/txgbe: fix driver load bit to inform firmware' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (32 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/txgbe: remove outer UDP checksum capability' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: " Xueming Li
` (62 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a6f62c575b8a25b727da665ee8659d31e8f1fb9c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a6f62c575b8a25b727da665ee8659d31e8f1fb9c Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:29:58 +0800
Subject: [PATCH] net/txgbe: fix driver load bit to inform firmware
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0a8f064bbc2cf4978857eae84e86c6b2c9e65feb ]
The Drv_load bit is reset to its default value of 0 after a hardware LAN
reset; reconfigure it to inform the firmware that the driver is loaded,
and set it back to 0 when the device is closed.
Fixes: b1f596677d8e ("net/txgbe: support device start")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_ethdev.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index a8bdc10232..9b44f11465 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -331,6 +331,8 @@ txgbe_pf_reset_hw(struct txgbe_hw *hw)
status = hw->mac.reset_hw(hw);
ctrl_ext = rd32(hw, TXGBE_PORTCTL);
+ /* let hardware know driver is loaded */
+ ctrl_ext |= TXGBE_PORTCTL_DRVLOAD;
/* Set PF Reset Done bit so PF/VF Mail Ops can work */
ctrl_ext |= TXGBE_PORTCTL_RSTDONE;
wr32(hw, TXGBE_PORTCTL, ctrl_ext);
@@ -2059,6 +2061,9 @@ txgbe_dev_close(struct rte_eth_dev *dev)
ret = txgbe_dev_stop(dev);
+ /* Let firmware take over control of hardware */
+ wr32m(hw, TXGBE_PORTCTL, TXGBE_PORTCTL_DRVLOAD, 0);
+
txgbe_dev_free_queues(dev);
txgbe_set_pcie_master(hw, false);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.249952213 +0800
+++ 0034-net-txgbe-fix-driver-load-bit-to-inform-firmware.patch 2024-12-06 23:26:43.923044828 +0800
@@ -1 +1 @@
-From 0a8f064bbc2cf4978857eae84e86c6b2c9e65feb Mon Sep 17 00:00:00 2001
+From a6f62c575b8a25b727da665ee8659d31e8f1fb9c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0a8f064bbc2cf4978857eae84e86c6b2c9e65feb ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 2834468764..4aa3bfd0bc 100644
+index a8bdc10232..9b44f11465 100644
@@ -31 +33 @@
-@@ -2061,6 +2063,9 @@ txgbe_dev_close(struct rte_eth_dev *dev)
+@@ -2059,6 +2061,9 @@ txgbe_dev_close(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ngbe: fix driver load bit to inform firmware' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (33 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/txgbe: fix driver load bit to inform firmware' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: reconfigure more MAC Rx registers' " Xueming Li
` (61 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e27b87fc9ce58d6f3ad1c19c90b447ca8feeed2a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e27b87fc9ce58d6f3ad1c19c90b447ca8feeed2a Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:30:04 +0800
Subject: [PATCH] net/ngbe: fix driver load bit to inform firmware
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit cb7be5b510ef0995fa171832f0e0994f667e2161 ]
The Drv_load bit is reset to its default value of 0 after a hardware LAN
reset; reconfigure it to inform the firmware that the driver is loaded,
and set it back to 0 when the device is closed.
Fixes: 3518df5774c7 ("net/ngbe: support device start/stop")
Fixes: cc63194e89cb ("net/ngbe: support close and reset device")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 4321924cb9..48b58284a6 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -263,6 +263,8 @@ ngbe_pf_reset_hw(struct ngbe_hw *hw)
status = hw->mac.reset_hw(hw);
ctrl_ext = rd32(hw, NGBE_PORTCTL);
+ /* let hardware know driver is loaded */
+ ctrl_ext |= NGBE_PORTCTL_DRVLOAD;
/* Set PF Reset Done bit so PF/VF Mail Ops can work */
ctrl_ext |= NGBE_PORTCTL_RSTDONE;
wr32(hw, NGBE_PORTCTL, ctrl_ext);
@@ -1269,6 +1271,9 @@ ngbe_dev_close(struct rte_eth_dev *dev)
ngbe_dev_stop(dev);
+ /* Let firmware take over control of hardware */
+ wr32m(hw, NGBE_PORTCTL, NGBE_PORTCTL_DRVLOAD, 0);
+
ngbe_dev_free_queues(dev);
ngbe_set_pcie_master(hw, false);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.281260313 +0800
+++ 0035-net-ngbe-fix-driver-load-bit-to-inform-firmware.patch 2024-12-06 23:26:43.923044828 +0800
@@ -1 +1 @@
-From cb7be5b510ef0995fa171832f0e0994f667e2161 Mon Sep 17 00:00:00 2001
+From e27b87fc9ce58d6f3ad1c19c90b447ca8feeed2a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit cb7be5b510ef0995fa171832f0e0994f667e2161 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 353d17acc8..238533f2b8 100644
+index 4321924cb9..48b58284a6 100644
@@ -32 +34 @@
-@@ -1277,6 +1279,9 @@ ngbe_dev_close(struct rte_eth_dev *dev)
+@@ -1269,6 +1271,9 @@ ngbe_dev_close(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ngbe: reconfigure more MAC Rx registers' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (34 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ngbe: " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: fix interrupt lost in legacy or MSI mode' " Xueming Li
` (60 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5247bd2c4fb7430fb9a80adcd1f6a4fc42331190
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5247bd2c4fb7430fb9a80adcd1f6a4fc42331190 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:30:05 +0800
Subject: [PATCH] net/ngbe: reconfigure more MAC Rx registers
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b8d52e1084a17c7ef83624f3bbd11a090e7b2267 ]
When the link status changes, there is a chance that the port can no
longer receive packets, due to hardware defects. These MAC Rx registers
should be reconfigured to fix this problem.
Fixes: b9246b8fa280 ("net/ngbe: support link update")
Fixes: a7c5f95ed9c2 ("net/ngbe: reconfigure MAC Rx when link update")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/base/ngbe_regs.h | 2 ++
drivers/net/ngbe/ngbe_ethdev.c | 6 ++++++
2 files changed, 8 insertions(+)
diff --git a/drivers/net/ngbe/base/ngbe_regs.h b/drivers/net/ngbe/base/ngbe_regs.h
index c0e79a2ba7..0d820f4079 100644
--- a/drivers/net/ngbe/base/ngbe_regs.h
+++ b/drivers/net/ngbe/base/ngbe_regs.h
@@ -712,6 +712,8 @@ enum ngbe_5tuple_protocol {
#define NGBE_MACRXFLT_CTL_PASS LS(3, 6, 0x3)
#define NGBE_MACRXFLT_RXALL MS(31, 0x1)
+#define NGBE_MAC_WDG_TIMEOUT 0x01100C
+
/******************************************************************************
* Statistic Registers
******************************************************************************/
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index 48b58284a6..e3772a7396 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -1916,6 +1916,7 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
bool link_up;
int err;
int wait = 1;
+ u32 reg;
memset(&link, 0, sizeof(link));
link.link_status = RTE_ETH_LINK_DOWN;
@@ -1973,8 +1974,13 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
wr32m(hw, NGBE_MACTXCFG, NGBE_MACTXCFG_SPEED_MASK,
NGBE_MACTXCFG_SPEED_1G | NGBE_MACTXCFG_TE);
}
+ /* Re configure MAC RX */
+ reg = rd32(hw, NGBE_MACRXCFG);
+ wr32(hw, NGBE_MACRXCFG, reg);
wr32m(hw, NGBE_MACRXFLT, NGBE_MACRXFLT_PROMISC,
NGBE_MACRXFLT_PROMISC);
+ reg = rd32(hw, NGBE_MAC_WDG_TIMEOUT);
+ wr32(hw, NGBE_MAC_WDG_TIMEOUT, reg);
}
return rte_eth_linkstatus_set(dev, &link);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.310797812 +0800
+++ 0036-net-ngbe-reconfigure-more-MAC-Rx-registers.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From b8d52e1084a17c7ef83624f3bbd11a090e7b2267 Mon Sep 17 00:00:00 2001
+From 5247bd2c4fb7430fb9a80adcd1f6a4fc42331190 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b8d52e1084a17c7ef83624f3bbd11a090e7b2267 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 8a6776b0e6..b1295280a7 100644
+index c0e79a2ba7..0d820f4079 100644
@@ -34 +36 @@
-index 238533f2b8..c372fd928c 100644
+index 48b58284a6..e3772a7396 100644
@@ -37 +39 @@
-@@ -1941,6 +1941,7 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+@@ -1916,6 +1916,7 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
@@ -45 +47 @@
-@@ -1998,8 +1999,13 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
+@@ -1973,8 +1974,13 @@ ngbe_dev_link_update_share(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ngbe: fix interrupt lost in legacy or MSI mode' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (35 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ngbe: reconfigure more MAC Rx registers' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: restrict configuration of VLAN strip offload' " Xueming Li
` (59 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fe2506753d5831153950339060b9b1c43aee13d5
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fe2506753d5831153950339060b9b1c43aee13d5 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:30:06 +0800
Subject: [PATCH] net/ngbe: fix interrupt lost in legacy or MSI mode
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 68f04c0aa79316de333441e7efdadd2876412ffa ]
When the interrupt is in legacy or MSI mode, a shared interrupt may
prevent the interrupt from being re-enabled. Fix this by reading the
shared interrupt status.
Fixes: b9246b8fa280 ("net/ngbe: support link update")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index e3772a7396..f48894bdb4 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -2168,6 +2168,19 @@ ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
struct ngbe_hw *hw = ngbe_dev_hw(dev);
struct ngbe_interrupt *intr = ngbe_dev_intr(dev);
+ eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_VEC0];
+ if (!eicr) {
+ /*
+ * shared interrupt alert!
+ * make sure interrupts are enabled because the read will
+ * have disabled interrupts.
+ */
+ if (!hw->adapter_stopped)
+ ngbe_enable_intr(dev);
+ return 0;
+ }
+ ((u32 *)hw->isb_mem)[NGBE_ISB_VEC0] = 0;
+
/* read-on-clear nic registers here */
eicr = ((u32 *)hw->isb_mem)[NGBE_ISB_MISC];
PMD_DRV_LOG(DEBUG, "eicr %x", eicr);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.343923112 +0800
+++ 0037-net-ngbe-fix-interrupt-lost-in-legacy-or-MSI-mode.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From 68f04c0aa79316de333441e7efdadd2876412ffa Mon Sep 17 00:00:00 2001
+From fe2506753d5831153950339060b9b1c43aee13d5 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 68f04c0aa79316de333441e7efdadd2876412ffa ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index c372fd928c..325a9d1eaf 100644
+index e3772a7396..f48894bdb4 100644
@@ -21 +23 @@
-@@ -2193,6 +2193,19 @@ ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
+@@ -2168,6 +2168,19 @@ ngbe_dev_interrupt_get_status(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/ngbe: restrict configuration of VLAN strip offload' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (36 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ngbe: fix interrupt lost in legacy or MSI mode' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: fix potential out of bounds stats access' " Xueming Li
` (58 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6002cd28d361d0905b7b255c0ba8a62c5a3ca137
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6002cd28d361d0905b7b255c0ba8a62c5a3ca137 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Mon, 4 Nov 2024 10:30:07 +0800
Subject: [PATCH] net/ngbe: restrict configuration of VLAN strip offload
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit baca8ec066dc6fdc42374e8eafd67eecfd6c9267 ]
There is a hardware limitation that the Rx ring config register is not
writable while the Rx ring is enabled, i.e. while the NGBE_RXCFG_ENA bit
is set. But disabling the ring while there is traffic causes the ring to
get stuck. So restrict the configuration of the VLAN strip offload: it is
rejected while the device is started.
Fixes: 59b46438fdaa ("net/ngbe: support VLAN offload and VLAN filter")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/ngbe/ngbe_ethdev.c | 49 ++++++++++++++--------------------
1 file changed, 20 insertions(+), 29 deletions(-)
diff --git a/drivers/net/ngbe/ngbe_ethdev.c b/drivers/net/ngbe/ngbe_ethdev.c
index f48894bdb4..aca6c2aaa1 100644
--- a/drivers/net/ngbe/ngbe_ethdev.c
+++ b/drivers/net/ngbe/ngbe_ethdev.c
@@ -584,41 +584,25 @@ ngbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
}
static void
-ngbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+ngbe_vlan_strip_q_set(struct rte_eth_dev *dev, uint16_t queue, int on)
{
- struct ngbe_hw *hw = ngbe_dev_hw(dev);
- struct ngbe_rx_queue *rxq;
- bool restart;
- uint32_t rxcfg, rxbal, rxbah;
-
if (on)
ngbe_vlan_hw_strip_enable(dev, queue);
else
ngbe_vlan_hw_strip_disable(dev, queue);
+}
- rxq = dev->data->rx_queues[queue];
- rxbal = rd32(hw, NGBE_RXBAL(rxq->reg_idx));
- rxbah = rd32(hw, NGBE_RXBAH(rxq->reg_idx));
- rxcfg = rd32(hw, NGBE_RXCFG(rxq->reg_idx));
- if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) {
- restart = (rxcfg & NGBE_RXCFG_ENA) &&
- !(rxcfg & NGBE_RXCFG_VLAN);
- rxcfg |= NGBE_RXCFG_VLAN;
- } else {
- restart = (rxcfg & NGBE_RXCFG_ENA) &&
- (rxcfg & NGBE_RXCFG_VLAN);
- rxcfg &= ~NGBE_RXCFG_VLAN;
- }
- rxcfg &= ~NGBE_RXCFG_ENA;
+static void
+ngbe_vlan_strip_queue_set(struct rte_eth_dev *dev, uint16_t queue, int on)
+{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
- if (restart) {
- /* set vlan strip for ring */
- ngbe_dev_rx_queue_stop(dev, queue);
- wr32(hw, NGBE_RXBAL(rxq->reg_idx), rxbal);
- wr32(hw, NGBE_RXBAH(rxq->reg_idx), rxbah);
- wr32(hw, NGBE_RXCFG(rxq->reg_idx), rxcfg);
- ngbe_dev_rx_queue_start(dev, queue);
+ if (!hw->adapter_stopped) {
+ PMD_DRV_LOG(ERR, "Please stop port first");
+ return;
}
+
+ ngbe_vlan_strip_q_set(dev, queue, on);
}
static int
@@ -844,9 +828,9 @@ ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
rxq = dev->data->rx_queues[i];
if (rxq->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
- ngbe_vlan_hw_strip_enable(dev, i);
+ ngbe_vlan_strip_q_set(dev, i, 1);
else
- ngbe_vlan_hw_strip_disable(dev, i);
+ ngbe_vlan_strip_q_set(dev, i, 0);
}
}
@@ -908,6 +892,13 @@ ngbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
static int
ngbe_vlan_offload_set(struct rte_eth_dev *dev, int mask)
{
+ struct ngbe_hw *hw = ngbe_dev_hw(dev);
+
+ if (!hw->adapter_stopped && (mask & RTE_ETH_VLAN_STRIP_MASK)) {
+ PMD_DRV_LOG(ERR, "Please stop port first");
+ return -EPERM;
+ }
+
ngbe_config_vlan_strip_on_all_queues(dev, mask);
ngbe_vlan_offload_config(dev, mask);
--
2.34.1
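For applications, the practical consequence is that VLAN stripping must now
be toggled while the port is stopped. A minimal sketch using the generic
ethdev API; the helper name is ours.

    #include <rte_ethdev.h>

    static int
    enable_vlan_strip_stopped(uint16_t port_id)
    {
        int mask, ret;

        ret = rte_eth_dev_stop(port_id);
        if (ret != 0)
            return ret;

        mask = rte_eth_dev_get_vlan_offload(port_id);
        if (mask < 0)
            return mask;

        /* With this patch, ngbe rejects the call with -EPERM
         * ("Please stop port first") if the port is still running.
         */
        ret = rte_eth_dev_set_vlan_offload(port_id,
                                           mask | RTE_ETH_VLAN_STRIP_OFFLOAD);
        if (ret != 0)
            return ret;

        return rte_eth_dev_start(port_id);
    }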
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.374681312 +0800
+++ 0038-net-ngbe-restrict-configuration-of-VLAN-strip-offloa.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From baca8ec066dc6fdc42374e8eafd67eecfd6c9267 Mon Sep 17 00:00:00 2001
+From 6002cd28d361d0905b7b255c0ba8a62c5a3ca137 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit baca8ec066dc6fdc42374e8eafd67eecfd6c9267 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 325a9d1eaf..08e87471f6 100644
+index f48894bdb4..aca6c2aaa1 100644
@@ -24 +26 @@
-@@ -586,41 +586,25 @@ ngbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
+@@ -584,41 +584,25 @@ ngbe_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
@@ -77 +79 @@
-@@ -846,9 +830,9 @@ ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
+@@ -844,9 +828,9 @@ ngbe_vlan_hw_strip_config(struct rte_eth_dev *dev)
@@ -89 +91 @@
-@@ -910,6 +894,13 @@ ngbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
+@@ -908,6 +892,13 @@ ngbe_vlan_offload_config(struct rte_eth_dev *dev, int mask)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/vmxnet3: fix potential out of bounds stats access' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (37 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/ngbe: restrict configuration of VLAN strip offload' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: support larger MTU with version 6' " Xueming Li
` (57 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Morten Brørup; +Cc: Xueming Li, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6a832ea8eef5041729951f20c670daaedfd29171
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6a832ea8eef5041729951f20c670daaedfd29171 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>
Date: Mon, 4 Nov 2024 10:52:19 +0000
Subject: [PATCH] net/vmxnet3: fix potential out of bounds stats access
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit d3a229dd493abcb29d5717c5ce37e0a0bc1777c4 ]
With virtual hardware version 6, the max number of RX queues was
increased to VMXNET3_EXT_MAX_RX_QUEUES (32) from VMXNET3_MAX_RX_QUEUES
(16); similarly, the max number of TX queues was increased to
VMXNET3_EXT_MAX_TX_QUEUES (32) from VMXNET3_MAX_TX_QUEUES (8). These
increases were not fully considered in the PMD...
The configured number of queues to provide statistics for
(RTE_ETHDEV_QUEUE_STAT_CNTRS) can be smaller than the driver's max number
of supported transmit queues for virtual hardware version 6
(VMXNET3_EXT_MAX_RX_QUEUES) (32), which causes out-of-bounds accesses to
the queue stats array if the application uses more than
RTE_ETHDEV_QUEUE_STAT_CNTRS queues.
This patch fixes this with two modifications:
- Increased stats array size to support hardware version 6.
- Respect RTE_ETHDEV_QUEUE_STAT_CNTRS when getting the per-queue
counters.
The build time check
RTE_BUILD_BUG_ON(RTE_ETHDEV_QUEUE_STAT_CNTRS < VMXNET3_MAX_TX_QUEUES)
has become irrelevant, so it is removed.
With this removal, per-queue stats for fewer queues are supported.
Fixes: b1584dd0affe ("net/vmxnet3: support version 6")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 32 +++++++++++++++++-----------
drivers/net/vmxnet3/vmxnet3_ethdev.h | 4 ++--
2 files changed, 22 insertions(+), 14 deletions(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index 082ae17465..e60935f835 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1470,42 +1470,52 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
struct vmxnet3_hw *hw = dev->data->dev_private;
struct UPT1_TxStats txStats;
struct UPT1_RxStats rxStats;
+ uint64_t packets, bytes;
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
for (i = 0; i < hw->num_tx_queues; i++) {
vmxnet3_tx_stats_get(hw, i, &txStats);
- stats->q_opackets[i] = txStats.ucastPktsTxOK +
+ packets = txStats.ucastPktsTxOK +
txStats.mcastPktsTxOK +
txStats.bcastPktsTxOK;
- stats->q_obytes[i] = txStats.ucastBytesTxOK +
+ bytes = txStats.ucastBytesTxOK +
txStats.mcastBytesTxOK +
txStats.bcastBytesTxOK;
- stats->opackets += stats->q_opackets[i];
- stats->obytes += stats->q_obytes[i];
+ stats->opackets += packets;
+ stats->obytes += bytes;
stats->oerrors += txStats.pktsTxError + txStats.pktsTxDiscard;
+
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_opackets[i] = packets;
+ stats->q_obytes[i] = bytes;
+ }
}
for (i = 0; i < hw->num_rx_queues; i++) {
vmxnet3_rx_stats_get(hw, i, &rxStats);
- stats->q_ipackets[i] = rxStats.ucastPktsRxOK +
+ packets = rxStats.ucastPktsRxOK +
rxStats.mcastPktsRxOK +
rxStats.bcastPktsRxOK;
- stats->q_ibytes[i] = rxStats.ucastBytesRxOK +
+ bytes = rxStats.ucastBytesRxOK +
rxStats.mcastBytesRxOK +
rxStats.bcastBytesRxOK;
- stats->ipackets += stats->q_ipackets[i];
- stats->ibytes += stats->q_ibytes[i];
-
- stats->q_errors[i] = rxStats.pktsRxError;
+ stats->ipackets += packets;
+ stats->ibytes += bytes;
stats->ierrors += rxStats.pktsRxError;
stats->imissed += rxStats.pktsRxOutOfBuf;
+
+ if (i < RTE_ETHDEV_QUEUE_STAT_CNTRS) {
+ stats->q_ipackets[i] = packets;
+ stats->q_ibytes[i] = bytes;
+ stats->q_errors[i] = rxStats.pktsRxError;
+ }
}
return 0;
@@ -1521,8 +1531,6 @@ vmxnet3_dev_stats_reset(struct rte_eth_dev *dev)
VMXNET3_WRITE_BAR1_REG(hw, VMXNET3_REG_CMD, VMXNET3_CMD_GET_STATS);
- RTE_BUILD_BUG_ON(RTE_ETHDEV_QUEUE_STAT_CNTRS < VMXNET3_MAX_TX_QUEUES);
-
for (i = 0; i < hw->num_tx_queues; i++) {
vmxnet3_hw_tx_stats_get(hw, i, &txStats);
memcpy(&hw->snapshot_tx_stats[i], &txStats,
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.h b/drivers/net/vmxnet3/vmxnet3_ethdev.h
index 2b3e2c4caa..e9ded6663d 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.h
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.h
@@ -121,8 +121,8 @@ struct vmxnet3_hw {
#define VMXNET3_VFT_TABLE_SIZE (VMXNET3_VFT_SIZE * sizeof(uint32_t))
UPT1_TxStats saved_tx_stats[VMXNET3_EXT_MAX_TX_QUEUES];
UPT1_RxStats saved_rx_stats[VMXNET3_EXT_MAX_RX_QUEUES];
- UPT1_TxStats snapshot_tx_stats[VMXNET3_MAX_TX_QUEUES];
- UPT1_RxStats snapshot_rx_stats[VMXNET3_MAX_RX_QUEUES];
+ UPT1_TxStats snapshot_tx_stats[VMXNET3_EXT_MAX_TX_QUEUES];
+ UPT1_RxStats snapshot_rx_stats[VMXNET3_EXT_MAX_RX_QUEUES];
uint16_t tx_prod_offset;
uint16_t rx_prod_offset[2];
/* device capability bit map */
--
2.34.1
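On the application side, the same limit applies when reading the counters
back: rte_eth_stats exposes at most RTE_ETHDEV_QUEUE_STAT_CNTRS per-queue
slots. A small sketch of a bounded read; the helper name is ours.

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static void
    print_rxq_packets(uint16_t port_id, uint16_t nb_rxq)
    {
        struct rte_eth_stats stats;
        uint16_t i, n;

        if (rte_eth_stats_get(port_id, &stats) != 0)
            return;

        /* Queues beyond RTE_ETHDEV_QUEUE_STAT_CNTRS still count in the
         * port totals, they just have no per-queue slot.
         */
        n = RTE_MIN(nb_rxq, (uint16_t)RTE_ETHDEV_QUEUE_STAT_CNTRS);
        for (i = 0; i < n; i++)
            printf("rxq %u: %" PRIu64 " packets\n", i, stats.q_ipackets[i]);
    }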
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.410171111 +0800
+++ 0039-net-vmxnet3-fix-potential-out-of-bounds-stats-access.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From d3a229dd493abcb29d5717c5ce37e0a0bc1777c4 Mon Sep 17 00:00:00 2001
+From 6a832ea8eef5041729951f20c670daaedfd29171 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit d3a229dd493abcb29d5717c5ce37e0a0bc1777c4 ]
@@ -33 +35,0 @@
-Cc: stable@dpdk.org
@@ -43 +45 @@
-index 79ab167421..619cfa21cf 100644
+index 082ae17465..e60935f835 100644
@@ -46 +48 @@
-@@ -1471,42 +1471,52 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+@@ -1470,42 +1470,52 @@ vmxnet3_dev_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
@@ -109 +111 @@
-@@ -1522,8 +1532,6 @@ vmxnet3_dev_stats_reset(struct rte_eth_dev *dev)
+@@ -1521,8 +1531,6 @@ vmxnet3_dev_stats_reset(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/vmxnet3: support larger MTU with version 6' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (38 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/vmxnet3: fix potential out of bounds stats access' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: fix error code for repeatedly create counter' " Xueming Li
` (56 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Morten Brørup; +Cc: Xueming Li, Jochen Behrens, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5bab0b75fb373232aab6a4146bff851353bd96c3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5bab0b75fb373232aab6a4146bff851353bd96c3 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Morten=20Br=C3=B8rup?= <mb@smartsharesystems.com>
Date: Mon, 4 Nov 2024 10:52:20 +0000
Subject: [PATCH] net/vmxnet3: support larger MTU with version 6
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a4b83a747d7d21c6d61b9ae69d39db5e1c700dcd ]
Virtual hardware version 6 supports larger max MTU, but the device
information (dev_info) did not reflect this, so it could not be used.
Fixes: b1584dd0affe ("net/vmxnet3: support version 6")
Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Jochen Behrens <jochen.behrens@broadcom.com>
---
drivers/net/vmxnet3/vmxnet3_ethdev.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index e60935f835..8305c27d15 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1574,7 +1574,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
dev_info->min_mtu = VMXNET3_MIN_MTU;
- dev_info->max_mtu = VMXNET3_MAX_MTU;
+ dev_info->max_mtu = VMXNET3_VERSION_GE_6(hw) ? VMXNET3_V6_MAX_MTU : VMXNET3_MAX_MTU;
dev_info->speed_capa = RTE_ETH_LINK_SPEED_10G;
dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
--
2.34.1
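A hedged usage sketch: query the advertised range and validate before
changing the MTU, so the same code works on both pre-v6 and v6 virtual
hardware. The helper name is ours.

    #include <errno.h>
    #include <rte_ethdev.h>

    static int
    set_mtu_checked(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* With this fix, a vmxnet3 v6 device reports the larger maximum. */
        if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
            return -EINVAL;

        return rte_eth_dev_set_mtu(port_id, mtu);
    }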
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.439755011 +0800
+++ 0040-net-vmxnet3-support-larger-MTU-with-version-6.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From a4b83a747d7d21c6d61b9ae69d39db5e1c700dcd Mon Sep 17 00:00:00 2001
+From 5bab0b75fb373232aab6a4146bff851353bd96c3 Mon Sep 17 00:00:00 2001
@@ -7,0 +8,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a4b83a747d7d21c6d61b9ae69d39db5e1c700dcd ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 619cfa21cf..15ca25b187 100644
+index e60935f835..8305c27d15 100644
@@ -25 +27 @@
-@@ -1575,7 +1575,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
+@@ -1574,7 +1574,7 @@ vmxnet3_dev_info_get(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: fix error code for repeatedly create counter' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (39 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/vmxnet3: support larger MTU with version 6' " Xueming Li
@ 2024-12-07 7:59 ` Xueming Li
2024-12-07 8:00 ` patch 'net/hns3: fix fully use hardware flow director table' " Xueming Li
` (55 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 7:59 UTC (permalink / raw)
To: Dengdui Huang; +Cc: Xueming Li, Jie Hai, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3bd77d7b2487c36917e5af8bfe057b69759990e9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3bd77d7b2487c36917e5af8bfe057b69759990e9 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Thu, 7 Nov 2024 19:56:44 +0800
Subject: [PATCH] net/hns3: fix error code for repeatedly create counter
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 585f1f68f18c7acbc4f920053cbf4ba888e0c271 ]
Return EINVAL instead of ENOSPC when the same counter ID is
used multiple times to create a counter.
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/hns3/hns3_flow.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/hns3/hns3_flow.c b/drivers/net/hns3/hns3_flow.c
index a32fb10ddf..db318854af 100644
--- a/drivers/net/hns3/hns3_flow.c
+++ b/drivers/net/hns3/hns3_flow.c
@@ -283,7 +283,7 @@ hns3_counter_new(struct rte_eth_dev *dev, uint32_t indirect, uint32_t id,
cnt = hns3_counter_lookup(dev, id);
if (cnt) {
if (!cnt->indirect || cnt->indirect != indirect)
- return rte_flow_error_set(error, ENOTSUP,
+ return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
cnt,
"Counter id is used, indirect flag not match");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.474437711 +0800
+++ 0041-net-hns3-fix-error-code-for-repeatedly-create-counte.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From 585f1f68f18c7acbc4f920053cbf4ba888e0c271 Mon Sep 17 00:00:00 2001
+From 3bd77d7b2487c36917e5af8bfe057b69759990e9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 585f1f68f18c7acbc4f920053cbf4ba888e0c271 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 192ffc015e..266934b45b 100644
+index a32fb10ddf..db318854af 100644
@@ -23 +25 @@
-@@ -286,7 +286,7 @@ hns3_counter_new(struct rte_eth_dev *dev, uint32_t indirect, uint32_t id,
+@@ -283,7 +283,7 @@ hns3_counter_new(struct rte_eth_dev *dev, uint32_t indirect, uint32_t id,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/hns3: fix fully use hardware flow director table' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (40 preceding siblings ...)
2024-12-07 7:59 ` patch 'net/hns3: fix error code for repeatedly create counter' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'event/octeontx: fix possible integer overflow' " Xueming Li
` (54 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Dengdui Huang; +Cc: Xueming Li, Jie Hai, Stephen Hemminger, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=50a97b8bc45fe9f2b1ba934029eaac251c146ae6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 50a97b8bc45fe9f2b1ba934029eaac251c146ae6 Mon Sep 17 00:00:00 2001
From: Dengdui Huang <huangdengdui@huawei.com>
Date: Thu, 7 Nov 2024 19:56:45 +0800
Subject: [PATCH] net/hns3: fix fully use hardware flow director table
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b8e60c33168a2999604c17322dd0198a6746428f ]
The hns3 driver uses rte_hash to check whether a flow rule has already
been inserted. Currently, the rte_hash extendable bucket table feature
is not enabled. When there are many hash conflicts, the hash table space
cannot be fully used, so a flow rule may fail to be inserted even though
the hardware flow director table still has free entries.
This patch fixes it by enabling the rte_hash extendable bucket table
feature.
Fixes: fcba820d9b9e ("net/hns3: support flow director")
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/hns3/hns3_fdir.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/net/hns3/hns3_fdir.c b/drivers/net/hns3/hns3_fdir.c
index d100e58d10..75a200c713 100644
--- a/drivers/net/hns3/hns3_fdir.c
+++ b/drivers/net/hns3/hns3_fdir.c
@@ -836,6 +836,7 @@ int hns3_fdir_filter_init(struct hns3_adapter *hns)
.key_len = sizeof(struct hns3_fdir_key_conf),
.hash_func = rte_hash_crc,
.hash_func_init_val = 0,
+ .extra_flag = RTE_HASH_EXTRA_FLAGS_EXT_TABLE,
};
int ret;
--
2.34.1
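For context, the rte_hash feature enabled above is requested at table
creation time. A standalone sketch follows, with illustrative sizes and key
type; nothing here is taken from the driver.

    #include <stdint.h>
    #include <rte_hash.h>
    #include <rte_hash_crc.h>

    struct flow_key {
        uint32_t src_ip;
        uint32_t dst_ip;
        uint16_t src_port;
        uint16_t dst_port;
    };

    static struct rte_hash *
    create_rule_table(void)
    {
        struct rte_hash_parameters params = {
            .name = "flow_rules",
            .entries = 1024,
            .key_len = sizeof(struct flow_key),
            .hash_func = rte_hash_crc,
            .hash_func_init_val = 0,
            .socket_id = 0,
            /* Without this flag, heavy collisions can make inserts fail
             * long before 'entries' keys are actually stored.
             */
            .extra_flag = RTE_HASH_EXTRA_FLAGS_EXT_TABLE,
        };

        return rte_hash_create(&params);
    }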
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.507857710 +0800
+++ 0042-net-hns3-fix-fully-use-hardware-flow-director-table.patch 2024-12-06 23:26:43.933044828 +0800
@@ -1 +1 @@
-From b8e60c33168a2999604c17322dd0198a6746428f Mon Sep 17 00:00:00 2001
+From 50a97b8bc45fe9f2b1ba934029eaac251c146ae6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b8e60c33168a2999604c17322dd0198a6746428f ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index d18d083535..aacad40e61 100644
+index d100e58d10..75a200c713 100644
@@ -28 +30 @@
-@@ -900,6 +900,7 @@ int hns3_fdir_filter_init(struct hns3_adapter *hns)
+@@ -836,6 +836,7 @@ int hns3_fdir_filter_init(struct hns3_adapter *hns)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'event/octeontx: fix possible integer overflow' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (41 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/hns3: fix fully use hardware flow director table' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'baseband/acc: fix ring memory allocation' " Xueming Li
` (53 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Hanumanth Pothula; +Cc: Xueming Li, Ali Alnubani, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=97660f027b345b63678cf4b6fc0c5b3bdce6a19d
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 97660f027b345b63678cf4b6fc0c5b3bdce6a19d Mon Sep 17 00:00:00 2001
From: Hanumanth Pothula <hpothula@marvell.com>
Date: Fri, 25 Oct 2024 16:28:02 +0530
Subject: [PATCH] event/octeontx: fix possible integer overflow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3e86eee028c69b98144e2c62ec48091467e790be ]
The last argument passed to ssovf_parsekv() is an
unsigned char *, but it is accessed as an integer,
which can lead to an integer overflow.
Hence, ensure the argument is accessed as a char,
and for better error handling use strtoul() instead of atoi().
Bugzilla ID: 1512
Fixes: 3516327e00fd ("event/octeontx: add selftest to device arguments")
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
---
drivers/event/octeontx/ssovf_evdev.c | 16 +++++++++++++---
1 file changed, 13 insertions(+), 3 deletions(-)
diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c
index a16f24e088..c0129328ef 100644
--- a/drivers/event/octeontx/ssovf_evdev.c
+++ b/drivers/event/octeontx/ssovf_evdev.c
@@ -714,10 +714,20 @@ ssovf_close(struct rte_eventdev *dev)
}
static int
-ssovf_parsekv(const char *key __rte_unused, const char *value, void *opaque)
+ssovf_parsekv(const char *key, const char *value, void *opaque)
{
- int *flag = opaque;
- *flag = !!atoi(value);
+ uint8_t *flag = opaque;
+ uint64_t v;
+ char *end;
+
+ errno = 0;
+ v = strtoul(value, &end, 0);
+ if ((errno != 0) || (value == end) || *end != '\0' || v > 1) {
+ ssovf_log_err("invalid %s value %s", key, value);
+ return -EINVAL;
+ }
+
+ *flag = !!v;
return 0;
}
--
2.34.1
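The same parsing pattern is reusable for any boolean devarg. A minimal
sketch of it outside the driver; the function name is ours.

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Accept only 0 or 1 (in any base strtoul understands) and reject
     * everything else, instead of letting atoi() silently truncate or
     * misparse the value.
     */
    static int
    parse_bool_kvarg(const char *value, uint8_t *flag)
    {
        char *end = NULL;
        unsigned long v;

        errno = 0;
        v = strtoul(value, &end, 0);
        if (errno != 0 || end == value || *end != '\0' || v > 1)
            return -EINVAL;

        *flag = (uint8_t)v;
        return 0;
    }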
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.538428610 +0800
+++ 0043-event-octeontx-fix-possible-integer-overflow.patch 2024-12-06 23:26:43.943044828 +0800
@@ -1 +1 @@
-From 3e86eee028c69b98144e2c62ec48091467e790be Mon Sep 17 00:00:00 2001
+From 97660f027b345b63678cf4b6fc0c5b3bdce6a19d Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3e86eee028c69b98144e2c62ec48091467e790be ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index 3a933b1db7..957fcab04e 100644
+index a16f24e088..c0129328ef 100644
@@ -27 +29 @@
-@@ -717,10 +717,20 @@ ssovf_close(struct rte_eventdev *dev)
+@@ -714,10 +714,20 @@ ssovf_close(struct rte_eventdev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'baseband/acc: fix ring memory allocation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (42 preceding siblings ...)
2024-12-07 8:00 ` patch 'event/octeontx: fix possible integer overflow' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'crypto/openssl: fix potential string overflow' " Xueming Li
` (52 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Nicolas Chautru; +Cc: Xueming Li, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=91dc468e14825496f56e730bbf1e8e7ac2544109
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 91dc468e14825496f56e730bbf1e8e7ac2544109 Mon Sep 17 00:00:00 2001
From: Nicolas Chautru <nicolas.chautru@intel.com>
Date: Thu, 7 Nov 2024 16:32:38 -0800
Subject: [PATCH] baseband/acc: fix ring memory allocation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0c5709824b531e83b36ed91852cea98b1cb292e1 ]
Allow ring memory allocation whose end address is aligned with 64 MB.
The previous logic was off by one.
Fixes: 060e76729302 ("baseband/acc100: add queue configuration")
Signed-off-by: Nicolas Chautru <nicolas.chautru@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
drivers/baseband/acc/acc_common.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/baseband/acc/acc_common.h b/drivers/baseband/acc/acc_common.h
index 6752c256d2..13f7ec40e6 100644
--- a/drivers/baseband/acc/acc_common.h
+++ b/drivers/baseband/acc/acc_common.h
@@ -787,7 +787,7 @@ alloc_sw_rings_min_mem(struct rte_bbdev *dev, struct acc_device *d,
sw_rings_base, ACC_SIZE_64MBYTE);
next_64mb_align_addr_iova = sw_rings_base_iova +
next_64mb_align_offset;
- sw_ring_iova_end_addr = sw_rings_base_iova + dev_sw_ring_size;
+ sw_ring_iova_end_addr = sw_rings_base_iova + dev_sw_ring_size - 1;
/* Check if the end of the sw ring memory block is before the
* start of next 64MB aligned mem address
--
2.34.1
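A small worked example of the off-by-one, with made-up addresses;
RTE_ALIGN_CEIL stands in here for the driver's own alignment helper.

    #include <stdbool.h>
    #include <stdint.h>
    #include <rte_common.h>

    static bool
    ring_fits_before_next_64mb(uint64_t base_iova, uint64_t ring_size)
    {
        const uint64_t sz_64mb = 64ULL * 1024 * 1024;
        uint64_t next_64mb = RTE_ALIGN_CEIL(base_iova, sz_64mb);
        uint64_t end_iova = base_iova + ring_size - 1;  /* last used byte */

        /* Example: base at 60 MB, size 4 MB. The last byte sits just
         * below the 64 MB boundary, so the block is acceptable;
         * comparing "base + size" (one past the end) wrongly rejected it.
         */
        return end_iova < next_64mb;
    }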
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.567760310 +0800
+++ 0044-baseband-acc-fix-ring-memory-allocation.patch 2024-12-06 23:26:43.943044828 +0800
@@ -1 +1 @@
-From 0c5709824b531e83b36ed91852cea98b1cb292e1 Mon Sep 17 00:00:00 2001
+From 91dc468e14825496f56e730bbf1e8e7ac2544109 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0c5709824b531e83b36ed91852cea98b1cb292e1 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 4c60b7896b..bf218332be 100644
+index 6752c256d2..13f7ec40e6 100644
@@ -22 +24 @@
-@@ -795,7 +795,7 @@ alloc_sw_rings_min_mem(struct rte_bbdev *dev, struct acc_device *d,
+@@ -787,7 +787,7 @@ alloc_sw_rings_min_mem(struct rte_bbdev *dev, struct acc_device *d,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'crypto/openssl: fix potential string overflow' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (43 preceding siblings ...)
2024-12-07 8:00 ` patch 'baseband/acc: fix ring memory allocation' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/crypto: fix synchronous API calls' " Xueming Li
` (51 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5bda3f3b964cf700d81e203fa3a7fac459b26c60
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5bda3f3b964cf700d81e203fa3a7fac459b26c60 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 17 Oct 2024 09:07:53 -0700
Subject: [PATCH] crypto/openssl: fix potential string overflow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c5819b0d96d1a24c25aa4324913fd2566eb19ae9 ]
The algorithm name is a string and should be copied with strlcpy()
rather than rte_memcpy(). This fixes a warning detected with
clang and ASAN.
Bugzilla ID: 1565
Fixes: 2b9c693f6ef5 ("crypto/openssl: support AES-CMAC operations")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/crypto/openssl/rte_openssl_pmd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 7538ae2953..017e74e765 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -677,7 +677,7 @@ openssl_set_session_auth_parameters(struct openssl_session *sess,
else
return -EINVAL;
- rte_memcpy(algo_name, algo, strlen(algo) + 1);
+ strlcpy(algo_name, algo, sizeof(algo_name));
params[0] = OSSL_PARAM_construct_utf8_string(
OSSL_MAC_PARAM_CIPHER, algo_name, 0);
params[1] = OSSL_PARAM_construct_end();
--
2.34.1
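For reference, the difference the change makes, shown in isolation; the
helper and its buffer-size parameter are illustrative only.

    #include <string.h>
    #include <rte_string_fns.h>   /* provides strlcpy() where libc lacks it */

    static void
    copy_algo_name(char *algo_name, size_t dst_size, const char *algo)
    {
        /* rte_memcpy(algo_name, algo, strlen(algo) + 1) writes
         * strlen(algo) + 1 bytes no matter how small the destination is;
         * strlcpy() bounds the copy to dst_size and always terminates.
         */
        strlcpy(algo_name, algo, dst_size);
    }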
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.597487009 +0800
+++ 0045-crypto-openssl-fix-potential-string-overflow.patch 2024-12-06 23:26:43.943044828 +0800
@@ -1 +1 @@
-From c5819b0d96d1a24c25aa4324913fd2566eb19ae9 Mon Sep 17 00:00:00 2001
+From 5bda3f3b964cf700d81e203fa3a7fac459b26c60 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c5819b0d96d1a24c25aa4324913fd2566eb19ae9 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 0616383921..b2442c7ebf 100644
+index 7538ae2953..017e74e765 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/crypto: fix synchronous API calls' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (44 preceding siblings ...)
2024-12-07 8:00 ` patch 'crypto/openssl: fix potential string overflow' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'crypto/qat: fix modexp/inv length' " Xueming Li
` (50 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Brian Dooley; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=11b2730b1dd2f587e485cc390535421ec2844d7a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 11b2730b1dd2f587e485cc390535421ec2844d7a Mon Sep 17 00:00:00 2001
From: Brian Dooley <brian.dooley@intel.com>
Date: Fri, 25 Oct 2024 15:22:51 +0100
Subject: [PATCH] test/crypto: fix synchronous API calls
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 251fdc592da5eddc4d84a95d0c151b0134504a32 ]
For the synchronous API the enqueue/dequeue burst functions are not
called. Skip these tests when calling the synchronous API.
Fixes: 4ad17a1c8fb3 ("test/crypto: fix enqueue/dequeue callback case")
Signed-off-by: Brian Dooley <brian.dooley@intel.com>
---
app/test/test_cryptodev.c | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 6cd38aefae..7d9fe29c02 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13625,6 +13625,10 @@ test_enq_callback_setup(void)
uint16_t qp_id = 0;
int j = 0;
+ /* Skip test if synchronous API is used */
+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+ return TEST_SKIPPED;
+
/* Verify the crypto capabilities for which enqueue/dequeue is done. */
cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
cap_idx.algo.auth = RTE_CRYPTO_AUTH_NULL;
@@ -13746,6 +13750,10 @@ test_deq_callback_setup(void)
uint16_t qp_id = 0;
int j = 0;
+ /* Skip test if synchronous API is used */
+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
+ return TEST_SKIPPED;
+
/* Verify the crypto capabilities for which enqueue/dequeue is done. */
cap_idx.type = RTE_CRYPTO_SYM_XFORM_AUTH;
cap_idx.algo.auth = RTE_CRYPTO_AUTH_NULL;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.630271209 +0800
+++ 0046-test-crypto-fix-synchronous-API-calls.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From 251fdc592da5eddc4d84a95d0c151b0134504a32 Mon Sep 17 00:00:00 2001
+From 11b2730b1dd2f587e485cc390535421ec2844d7a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 251fdc592da5eddc4d84a95d0c151b0134504a32 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -14,2 +16,2 @@
- app/test/test_cryptodev.c | 11 ++++++++++-
- 1 file changed, 10 insertions(+), 1 deletion(-)
+ app/test/test_cryptodev.c | 8 ++++++++
+ 1 file changed, 8 insertions(+)
@@ -18 +20 @@
-index 25eef342b0..c647baeee1 100644
+index 6cd38aefae..7d9fe29c02 100644
@@ -21,11 +23 @@
-@@ -2496,7 +2496,8 @@ test_queue_pair_descriptor_count(void)
- int qp_depth = 0;
- int i;
-
-- RTE_VERIFY(gbl_action_type != RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO);
-+ if (gbl_action_type == RTE_SECURITY_ACTION_TYPE_CPU_CRYPTO)
-+ return TEST_SKIPPED;
-
- /* Verify if the queue pair depth API is supported by driver */
- qp_depth = rte_cryptodev_qp_depth_used(ts_params->valid_devs[0], 0);
-@@ -15135,6 +15136,10 @@ test_enq_callback_setup(void)
+@@ -13625,6 +13625,10 @@ test_enq_callback_setup(void)
@@ -42 +34 @@
-@@ -15256,6 +15261,10 @@ test_deq_callback_setup(void)
+@@ -13746,6 +13750,10 @@ test_deq_callback_setup(void)
* patch 'crypto/qat: fix modexp/inv length' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (45 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/crypto: fix synchronous API calls' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'crypto/qat: fix ECDSA session handling' " Xueming Li
` (49 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Arkadiusz Kusztal; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=aaffe2a33859e8a78c0bb741fc16846d802f8050
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From aaffe2a33859e8a78c0bb741fc16846d802f8050 Mon Sep 17 00:00:00 2001
From: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Date: Thu, 31 Oct 2024 19:19:17 +0000
Subject: [PATCH] crypto/qat: fix modexp/inv length
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 5b2fe7ef3c1b731f086d9454262a530a082b0441 ]
This commit fixes an unset result length in the modular algorithms
of the QAT asymmetric crypto PMD.
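As an editorial aside, a minimal hedged application-side sketch of where
modex.result.length is consumed once the PMD fills it in; the helper name is
hypothetical and the op is assumed to have been dequeued successfully:

#include <stdio.h>
#include <rte_crypto.h>
#include <rte_crypto_asym.h>

/* Dump a modexp result after dequeue. Before this fix the QAT PMD could
 * leave modex.result.length unset, so a loop like this would read a stale
 * length value. */
static void
dump_modexp_result(const struct rte_crypto_op *op)
{
	const struct rte_crypto_asym_op *aop = op->asym;
	size_t i;

	for (i = 0; i < aop->modex.result.length; i++)
		printf("%02x", aop->modex.result.data[i]);
	printf("\n");
}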
Fixes: 3b78aa7b2317 ("crypto/qat: refactor asymmetric crypto functions")
Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
---
drivers/crypto/qat/qat_asym.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 5d240a3de1..38c23b397b 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -270,6 +270,7 @@ modexp_collect(struct rte_crypto_asym_op *asym_op,
rte_memcpy(modexp_result,
cookie->output_array[0] + alg_bytesize
- n.length, n.length);
+ asym_op->modex.result.length = alg_bytesize;
HEXDUMP("ModExp result", cookie->output_array[0],
alg_bytesize);
return RTE_CRYPTO_OP_STATUS_SUCCESS;
@@ -331,6 +332,7 @@ modinv_collect(struct rte_crypto_asym_op *asym_op,
- n.length),
cookie->output_array[0] + alg_bytesize
- n.length, n.length);
+ asym_op->modinv.result.length = alg_bytesize;
HEXDUMP("ModInv result", cookie->output_array[0],
alg_bytesize);
return RTE_CRYPTO_OP_STATUS_SUCCESS;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.669805008 +0800
+++ 0047-crypto-qat-fix-modexp-inv-length.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From 5b2fe7ef3c1b731f086d9454262a530a082b0441 Mon Sep 17 00:00:00 2001
+From aaffe2a33859e8a78c0bb741fc16846d802f8050 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 5b2fe7ef3c1b731f086d9454262a530a082b0441 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 9e97582e22..7bb2f6c1e0 100644
+index 5d240a3de1..38c23b397b 100644
@@ -21 +23 @@
-@@ -277,6 +277,7 @@ modexp_collect(struct rte_crypto_asym_op *asym_op,
+@@ -270,6 +270,7 @@ modexp_collect(struct rte_crypto_asym_op *asym_op,
@@ -29 +31 @@
-@@ -338,6 +339,7 @@ modinv_collect(struct rte_crypto_asym_op *asym_op,
+@@ -331,6 +332,7 @@ modinv_collect(struct rte_crypto_asym_op *asym_op,
* patch 'crypto/qat: fix ECDSA session handling' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (46 preceding siblings ...)
2024-12-07 8:00 ` patch 'crypto/qat: fix modexp/inv length' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/igc: fix Rx buffers when timestamping enabled' " Xueming Li
` (48 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Arkadiusz Kusztal; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=084a75515686e31358c6052839456d7b6bf2538e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 084a75515686e31358c6052839456d7b6bf2538e Mon Sep 17 00:00:00 2001
From: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
Date: Wed, 6 Nov 2024 15:14:21 +0000
Subject: [PATCH] crypto/qat: fix ECDSA session handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 20e633b0ca15539b682539a665e8d3dc0dc2c899 ]
Fixed a problem with setting the key in the session for the ECDSA
algorithm. Since the EC key is now initialized in the session,
this must be reflected in the PMD session initialization function.
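For context, a hedged application-side sketch (not from the patch) of the EC
key material that the PMD now deep-copies into its session; the curve, buffer
sizes and variable names are illustrative assumptions:

#include <stdint.h>
#include <rte_crypto_asym.h>

/* Illustrative P-256 key material; real values come from key generation. */
static uint8_t priv_key[32], pub_x[32], pub_y[32];

static struct rte_crypto_asym_xform ecdsa_xform = {
	.xform_type = RTE_CRYPTO_ASYM_XFORM_ECDSA,
	.ec = {
		.curve_id = RTE_CRYPTO_EC_GROUP_SECP256R1,
		.pkey = { .data = priv_key, .length = sizeof(priv_key) },
		.q = {
			.x = { .data = pub_x, .length = sizeof(pub_x) },
			.y = { .data = pub_y, .length = sizeof(pub_y) },
		},
	},
};

/* Passing this xform to asymmetric session creation is what ends up in the
 * PMD's session_set_ec() shown below. */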
Fixes: badc0c6f6d6a ("cryptodev: set private and public keys in EC session")
Signed-off-by: Arkadiusz Kusztal <arkadiuszx.kusztal@intel.com>
---
drivers/crypto/qat/qat_asym.c | 41 +++++++++++++++++++++++++++++++++--
1 file changed, 39 insertions(+), 2 deletions(-)
diff --git a/drivers/crypto/qat/qat_asym.c b/drivers/crypto/qat/qat_asym.c
index 38c23b397b..4bc087987f 100644
--- a/drivers/crypto/qat/qat_asym.c
+++ b/drivers/crypto/qat/qat_asym.c
@@ -1337,11 +1337,48 @@ err:
return ret;
}
-static void
+static int
session_set_ec(struct qat_asym_session *qat_session,
struct rte_crypto_asym_xform *xform)
{
+ uint8_t *pkey = xform->ec.pkey.data;
+ uint8_t *q_x = xform->ec.q.x.data;
+ uint8_t *q_y = xform->ec.q.y.data;
+
+ qat_session->xform.ec.pkey.data =
+ rte_malloc(NULL, xform->ec.pkey.length, 0);
+ if (qat_session->xform.ec.pkey.length &&
+ qat_session->xform.ec.pkey.data == NULL)
+ return -ENOMEM;
+ qat_session->xform.ec.q.x.data = rte_malloc(NULL,
+ xform->ec.q.x.length, 0);
+ if (qat_session->xform.ec.q.x.length &&
+ qat_session->xform.ec.q.x.data == NULL) {
+ rte_free(qat_session->xform.ec.pkey.data);
+ return -ENOMEM;
+ }
+ qat_session->xform.ec.q.y.data = rte_malloc(NULL,
+ xform->ec.q.y.length, 0);
+ if (qat_session->xform.ec.q.y.length &&
+ qat_session->xform.ec.q.y.data == NULL) {
+ rte_free(qat_session->xform.ec.pkey.data);
+ rte_free(qat_session->xform.ec.q.x.data);
+ return -ENOMEM;
+ }
+
+ memcpy(qat_session->xform.ec.pkey.data, pkey,
+ xform->ec.pkey.length);
+ qat_session->xform.ec.pkey.length = xform->ec.pkey.length;
+ memcpy(qat_session->xform.ec.q.x.data, q_x,
+ xform->ec.q.x.length);
+ qat_session->xform.ec.q.x.length = xform->ec.q.x.length;
+ memcpy(qat_session->xform.ec.q.y.data, q_y,
+ xform->ec.q.y.length);
+ qat_session->xform.ec.q.y.length = xform->ec.q.y.length;
qat_session->xform.ec.curve_id = xform->ec.curve_id;
+
+ return 0;
+
}
int
@@ -1375,7 +1412,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
case RTE_CRYPTO_ASYM_XFORM_ECDSA:
case RTE_CRYPTO_ASYM_XFORM_ECPM:
case RTE_CRYPTO_ASYM_XFORM_ECDH:
- session_set_ec(qat_session, xform);
+ ret = session_set_ec(qat_session, xform);
break;
case RTE_CRYPTO_ASYM_XFORM_SM2:
break;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.699435508 +0800
+++ 0048-crypto-qat-fix-ECDSA-session-handling.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From 20e633b0ca15539b682539a665e8d3dc0dc2c899 Mon Sep 17 00:00:00 2001
+From 084a75515686e31358c6052839456d7b6bf2538e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 20e633b0ca15539b682539a665e8d3dc0dc2c899 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -19 +21 @@
-index 7bb2f6c1e0..f5b56b2f71 100644
+index 38c23b397b..4bc087987f 100644
@@ -22 +24 @@
-@@ -1348,11 +1348,48 @@ err:
+@@ -1337,11 +1337,48 @@ err:
@@ -72 +74 @@
-@@ -1388,7 +1425,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
+@@ -1375,7 +1412,7 @@ qat_asym_session_configure(struct rte_cryptodev *dev __rte_unused,
* patch 'net/igc: fix Rx buffers when timestamping enabled' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (47 preceding siblings ...)
2024-12-07 8:00 ` patch 'crypto/qat: fix ECDSA session handling' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/cpfl: fix forwarding to physical port' " Xueming Li
` (47 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Martin Weiser; +Cc: Xueming Li, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=57f022e772fe4d6249a18b05d6bf2810c1f5a410
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 57f022e772fe4d6249a18b05d6bf2810c1f5a410 Mon Sep 17 00:00:00 2001
From: Martin Weiser <martin.weiser@allegro-packets.com>
Date: Fri, 1 Nov 2024 14:57:26 +0100
Subject: [PATCH] net/igc: fix Rx buffers when timestamping enabled
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4e08d335554ec6d975ded8a7badf81e0edb39234 ]
When hardware-timestamping is enabled (RTE_ETH_RX_OFFLOAD_TIMESTAMP),
the length of the prepended hardware timestamp was not subtracted from
the data length so that received packets were 16 bytes longer than
expected.
In scatter-gather mode only the first mbuf has a timestamp but the
data offset of the follow-up mbufs was not adjusted accordingly.
This caused 16 bytes of packet data to be missing between
the segments.
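For reference, a minimal hedged sketch (not part of the patch) of the
application-side configuration that enables the offload handled here; the
helper name and queue counts are illustrative:

#include <rte_ethdev.h>

/* Enable hardware Rx timestamping on a port with one Rx/Tx queue pair.
 * With this offload the igc hardware prepends a 16-byte timestamp that the
 * fixed Rx path below now accounts for. */
static int
enable_rx_timestamp(uint16_t port_id)
{
	struct rte_eth_conf conf = { 0 };

	conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_TIMESTAMP;
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}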
Fixes: 4f6fbbf6f17d ("net/igc: support IEEE 1588 PTP")
Signed-off-by: Martin Weiser <martin.weiser@allegro-packets.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/igc/igc_txrx.c | 26 ++++++++++++++++++++++++++
1 file changed, 26 insertions(+)
diff --git a/drivers/net/igc/igc_txrx.c b/drivers/net/igc/igc_txrx.c
index 5c60e3e997..a54c4681f7 100644
--- a/drivers/net/igc/igc_txrx.c
+++ b/drivers/net/igc/igc_txrx.c
@@ -347,6 +347,13 @@ igc_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
rxm->data_off = RTE_PKTMBUF_HEADROOM;
data_len = rte_le_to_cpu_16(rxd.wb.upper.length) - rxq->crc_len;
+ /*
+ * When the RTE_ETH_RX_OFFLOAD_TIMESTAMP offload is enabled the
+ * length in the descriptor still accounts for the timestamp so
+ * it must be subtracted.
+ */
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP)
+ data_len -= IGC_TS_HDR_LEN;
rxm->data_len = data_len;
rxm->pkt_len = data_len;
rxm->nb_segs = 1;
@@ -509,6 +516,24 @@ next_desc:
*/
rxm->data_off = RTE_PKTMBUF_HEADROOM;
data_len = rte_le_to_cpu_16(rxd.wb.upper.length);
+ if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) {
+ /*
+ * When the RTE_ETH_RX_OFFLOAD_TIMESTAMP offload is enabled
+ * the pkt_addr of all software ring entries is moved forward
+ * by IGC_TS_HDR_LEN (see igc_alloc_rx_queue_mbufs()) so that
+ * when the hardware writes the packet with a prepended
+ * timestamp the actual packet data still starts at the
+ * normal data offset. The length in the descriptor still
+ * accounts for the timestamp so it needs to be subtracted.
+ * Follow-up mbufs do not have the timestamp so the data
+ * offset must be adjusted to point to the start of the packet
+ * data.
+ */
+ if (first_seg == NULL)
+ data_len -= IGC_TS_HDR_LEN;
+ else
+ rxm->data_off -= IGC_TS_HDR_LEN;
+ }
rxm->data_len = data_len;
/*
@@ -557,6 +582,7 @@ next_desc:
last_seg->data_len = last_seg->data_len -
(RTE_ETHER_CRC_LEN - data_len);
last_seg->next = NULL;
+ rxm = last_seg;
} else {
rxm->data_len = (uint16_t)
(data_len - RTE_ETHER_CRC_LEN);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.729222608 +0800
+++ 0049-net-igc-fix-Rx-buffers-when-timestamping-enabled.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From 4e08d335554ec6d975ded8a7badf81e0edb39234 Mon Sep 17 00:00:00 2001
+From 57f022e772fe4d6249a18b05d6bf2810c1f5a410 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4e08d335554ec6d975ded8a7badf81e0edb39234 ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -26 +28 @@
-index d0cee1b016..fabab5b1a3 100644
+index 5c60e3e997..a54c4681f7 100644
* patch 'net/cpfl: fix forwarding to physical port' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (48 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/igc: fix Rx buffers when timestamping enabled' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix WC TCAM multi-slice delete' " Xueming Li
` (46 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Praveen Shetty; +Cc: Xueming Li, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7230a4907ce03886a03a36a76a7e105da1f1af14
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7230a4907ce03886a03a36a76a7e105da1f1af14 Mon Sep 17 00:00:00 2001
From: Praveen Shetty <praveen.shetty@intel.com>
Date: Fri, 8 Nov 2024 11:09:23 +0000
Subject: [PATCH] net/cpfl: fix forwarding to physical port
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b0e6aff62efa4bcc23f64b80b91709bef73e6d79 ]
The CPFL PMD should be able to support the traffic forwarding capabilities
below, based on the rte_flow action types.
1. Forwarding the traffic to the local CPFL vports using
port_representor action.
2. Forwarding the traffic to the physical IO ports using
represented_port action.
3. Forwarding the traffic to an IDPF VF using represented_port action.
The second use case, forwarding the traffic to IO ports using the
represented_port action, is not working due to the additional check
added in the previous patch (861261957684 ("net/cpfl: add checks for
flow action types")).
This patch removes the incorrect check to fix the issue.
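As context for use case 2, a hedged sketch (editorial, not from the patch) of
an rte_flow rule that forwards matched traffic to the physical port through
the represented_port action; the port IDs are illustrative assumptions:

#include <rte_flow.h>

/* Forward all ingress Ethernet traffic seen on ethdev 0 to the ethdev that
 * represents the physical IO port (assumed here to be port 1). */
static struct rte_flow *
forward_to_phy_port(void)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_ethdev phy = { .port_id = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT, .conf = &phy },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	return rte_flow_create(0, &attr, pattern, actions, &error);
}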
Fixes: 861261957684 ("net/cpfl: add checks for flow action types")
Signed-off-by: Praveen Shetty <praveen.shetty@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/cpfl/cpfl_flow_engine_fxp.c | 5 -----
1 file changed, 5 deletions(-)
diff --git a/drivers/net/cpfl/cpfl_flow_engine_fxp.c b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
index 9101a0e506..56a9a85345 100644
--- a/drivers/net/cpfl/cpfl_flow_engine_fxp.c
+++ b/drivers/net/cpfl/cpfl_flow_engine_fxp.c
@@ -298,11 +298,6 @@ cpfl_fxp_parse_action(struct cpfl_itf *itf,
PMD_DRV_LOG(ERR, "Cannot use port_representor action for the represented_port");
goto err;
}
- if (action_type == RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT &&
- dst_itf->type == CPFL_ITF_TYPE_VPORT) {
- PMD_DRV_LOG(ERR, "Cannot use represented_port action for the local vport");
- goto err;
- }
if (is_vsi)
dev_id = cpfl_get_vsi_id(dst_itf);
else
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.757898007 +0800
+++ 0050-net-cpfl-fix-forwarding-to-physical-port.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From b0e6aff62efa4bcc23f64b80b91709bef73e6d79 Mon Sep 17 00:00:00 2001
+From 7230a4907ce03886a03a36a76a7e105da1f1af14 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b0e6aff62efa4bcc23f64b80b91709bef73e6d79 ]
@@ -25 +27,0 @@
-Cc: stable@dpdk.org
@@ -34 +36 @@
-index 0101c30911..689ed82f18 100644
+index 9101a0e506..56a9a85345 100644
@@ -37 +39 @@
-@@ -301,11 +301,6 @@ cpfl_fxp_parse_action(struct cpfl_itf *itf,
+@@ -298,11 +298,6 @@ cpfl_fxp_parse_action(struct cpfl_itf *itf,
* patch 'net/bnxt/tf_core: fix WC TCAM multi-slice delete' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (49 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/cpfl: fix forwarding to physical port' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix TCAM manager data corruption' " Xueming Li
` (45 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Shahaji Bhosle
Cc: Xueming Li, Sriharsha Basavapatna, Kishore Padmanabha,
Randy Schacher, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e5ebe71958e2c9f1e93fe4211f2e02aea5391363
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e5ebe71958e2c9f1e93fe4211f2e02aea5391363 Mon Sep 17 00:00:00 2001
From: Shahaji Bhosle <sbhosle@broadcom.com>
Date: Thu, 7 Nov 2024 19:22:08 +0530
Subject: [PATCH] net/bnxt/tf_core: fix WC TCAM multi-slice delete
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 78dcdb821cb81f4abddfd4abc0192e238d4bfcec ]
FW tries to update the HWRM request data in the
delete case to update the mode bit and also
update the invalid profile id. This update only
happens when the data is sent over DMA. HWRM
requests are read-only buffers and cannot be
updated. So the driver will now always send the WC
TCAM set message over the DMA channel.
Also update the tunnel alloc APIs to provide an error message.
Fixes: ca5e61bd562d ("net/bnxt: support EM and TCAM lookup with table scope")
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/tf_core/tf_msg.c | 28 +++++++++++-----------
drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c | 16 +++++++++++--
2 files changed, 28 insertions(+), 16 deletions(-)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 1c66c7e01a..4aa90f6b07 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -1612,20 +1612,20 @@ tf_msg_tcam_entry_set(struct tf *tfp,
req.result_size = parms->result_size;
data_size = 2 * req.key_size + req.result_size;
- if (data_size <= TF_PCI_BUF_SIZE_MAX) {
- /* use pci buffer */
- data = &req.dev_data[0];
- } else {
- /* use dma buffer */
- req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
- rc = tf_msg_alloc_dma_buf(&buf, data_size);
- if (rc)
- goto cleanup;
- data = buf.va_addr;
- tfp_memcpy(&req.dev_data[0],
- &buf.pa_addr,
- sizeof(buf.pa_addr));
- }
+ /*
+ * Always use dma buffer, as the delete multi slice
+ * tcam entries not support with HWRM request buffer
+ * only DMA'ed buffer can update the mode bits for
+ * the delete to work
+ */
+ req.flags |= HWRM_TF_TCAM_SET_INPUT_FLAGS_DMA;
+ rc = tf_msg_alloc_dma_buf(&buf, data_size);
+ if (rc)
+ goto cleanup;
+ data = buf.va_addr;
+ tfp_memcpy(&req.dev_data[0],
+ &buf.pa_addr,
+ sizeof(buf.pa_addr));
tfp_memcpy(&data[0], parms->key, parms->key_size);
tfp_memcpy(&data[parms->key_size], parms->mask, parms->key_size);
diff --git a/drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c b/drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c
index 239191e14e..b0d9d8d3d9 100644
--- a/drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c
+++ b/drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c
@@ -32,9 +32,16 @@ bnxt_tunnel_dst_port_alloc(struct bnxt *bp,
uint16_t port,
uint8_t type)
{
- return bnxt_hwrm_tunnel_dst_port_alloc(bp,
+ int rc = 0;
+ rc = bnxt_hwrm_tunnel_dst_port_alloc(bp,
port,
type);
+ if (rc) {
+ PMD_DRV_LOG(ERR, "Tunnel type:%d alloc failed for port:%d error:%s\n",
+ type, port, (rc == HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ALLOCATED) ?
+ "already allocated" : "no resource");
+ }
+ return rc;
}
int
@@ -589,7 +596,12 @@ bnxt_pmd_global_tunnel_set(uint16_t port_id, uint8_t type,
}
rc = bnxt_hwrm_tunnel_dst_port_alloc(bp, udp_port, hwtype);
- if (!rc) {
+ if (rc) {
+ if (rc == HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ALLOCATED)
+ PMD_DRV_LOG(ERR, "Tunnel already allocated, type:%d port:%d\n", hwtype, udp_port);
+ else
+ PMD_DRV_LOG(ERR, "Tunnel allocation failed, type:%d port:%d\n", hwtype, udp_port);
+ } else {
ulp_global_tunnel_db[type].ref_cnt++;
ulp_global_tunnel_db[type].dport = udp_port;
bnxt_pmd_global_reg_data_to_hndl(port_id, bp->ecpri_upar_in_use,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.787438007 +0800
+++ 0051-net-bnxt-tf_core-fix-WC-TCAM-multi-slice-delete.patch 2024-12-06 23:26:43.953044828 +0800
@@ -1 +1 @@
-From 78dcdb821cb81f4abddfd4abc0192e238d4bfcec Mon Sep 17 00:00:00 2001
+From e5ebe71958e2c9f1e93fe4211f2e02aea5391363 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 78dcdb821cb81f4abddfd4abc0192e238d4bfcec ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -26,2 +28,2 @@
- drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c | 21 ++++++++++++++--
- 2 files changed, 33 insertions(+), 16 deletions(-)
+ drivers/net/bnxt/tf_ulp/bnxt_tf_pmd_shim.c | 16 +++++++++++--
+ 2 files changed, 28 insertions(+), 16 deletions(-)
@@ -69 +71 @@
-index 75a0b77ac2..fc8d0098f9 100644
+index 239191e14e..b0d9d8d3d9 100644
@@ -72 +74 @@
-@@ -32,9 +32,18 @@ bnxt_tunnel_dst_port_alloc(struct bnxt *bp,
+@@ -32,9 +32,16 @@ bnxt_tunnel_dst_port_alloc(struct bnxt *bp,
@@ -82,5 +84,3 @@
-+ PMD_DRV_LOG_LINE(ERR, "Tunnel type:%d alloc failed for port:%d error:%s",
-+ type, port,
-+ (rc ==
-+ HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ALLOCATED) ?
-+ "already allocated" : "no resource");
++ PMD_DRV_LOG(ERR, "Tunnel type:%d alloc failed for port:%d error:%s\n",
++ type, port, (rc == HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ALLOCATED) ?
++ "already allocated" : "no resource");
@@ -92 +92 @@
-@@ -589,7 +598,15 @@ bnxt_pmd_global_tunnel_set(uint16_t port_id, uint8_t type,
+@@ -589,7 +596,12 @@ bnxt_pmd_global_tunnel_set(uint16_t port_id, uint8_t type,
@@ -99,3 +99 @@
-+ PMD_DRV_LOG_LINE(ERR,
-+ "Tunnel already allocated, type:%d port:%d",
-+ hwtype, udp_port);
++ PMD_DRV_LOG(ERR, "Tunnel already allocated, type:%d port:%d\n", hwtype, udp_port);
@@ -103,2 +101 @@
-+ PMD_DRV_LOG_LINE(ERR, "Tunnel allocation failed, type:%d port:%d",
-+ hwtype, udp_port);
++ PMD_DRV_LOG(ERR, "Tunnel allocation failed, type:%d port:%d\n", hwtype, udp_port);
* patch 'net/bnxt/tf_core: fix TCAM manager data corruption' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (50 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix WC TCAM multi-slice delete' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix Thor TF EM key size check' " Xueming Li
` (44 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Shahaji Bhosle
Cc: Xueming Li, Sriharsha Basavapatna, Farah Smith,
Kishore Padmanabha, Shuanglin Wang, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=f912142dc89ac8f14242b40e523ee192a0c46d55
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From f912142dc89ac8f14242b40e523ee192a0c46d55 Mon Sep 17 00:00:00 2001
From: Shahaji Bhosle <sbhosle@broadcom.com>
Date: Thu, 7 Nov 2024 19:22:09 +0530
Subject: [PATCH] net/bnxt/tf_core: fix TCAM manager data corruption
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0bee506e9ec6bca60111b63e631c84080b14aec2 ]
Max entries per session were not getting initialized
to 0 when the sessions were closed.
Reset the max entries counter when the session is initialized.
Fixes: 97435d7906d7 ("net/bnxt: update Truflow core")
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shuanglin Wang <shuanglin.wang@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/tf_core/cfa_tcam_mgr.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c b/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
index f26d93e7a9..9df2d2b937 100644
--- a/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
+++ b/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
@@ -909,6 +909,7 @@ cfa_tcam_mgr_init(int sess_idx, enum cfa_tcam_mgr_device_type type,
/* Now calculate the max entries per table and global max entries based
* on the updated table limits.
*/
+ cfa_tcam_mgr_max_entries[sess_idx] = 0;
for (dir = 0; dir < ARRAY_SIZE(cfa_tcam_mgr_tables[sess_idx]); dir++)
for (tbl_type = 0;
tbl_type < ARRAY_SIZE(cfa_tcam_mgr_tables[sess_idx][dir]);
@@ -958,8 +959,8 @@ cfa_tcam_mgr_init(int sess_idx, enum cfa_tcam_mgr_device_type type,
if (parms != NULL)
parms->max_entries = cfa_tcam_mgr_max_entries[sess_idx];
- CFA_TCAM_MGR_LOG(INFO, "Global TCAM table initialized for sess_idx %d.\n",
- sess_idx);
+ CFA_TCAM_MGR_LOG(DEBUG, "Global TCAM table initialized for sess_idx %d max entries %d.\n",
+ sess_idx, cfa_tcam_mgr_max_entries[sess_idx]);
return 0;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.821620507 +0800
+++ 0052-net-bnxt-tf_core-fix-TCAM-manager-data-corruption.patch 2024-12-06 23:26:43.963044828 +0800
@@ -1 +1 @@
-From 0bee506e9ec6bca60111b63e631c84080b14aec2 Mon Sep 17 00:00:00 2001
+From f912142dc89ac8f14242b40e523ee192a0c46d55 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0bee506e9ec6bca60111b63e631c84080b14aec2 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
* patch 'net/bnxt/tf_core: fix Thor TF EM key size check' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (51 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix TCAM manager data corruption' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix slice count in case of HA entry move' " Xueming Li
` (43 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Farah Smith
Cc: Xueming Li, Sriharsha Basavapatna, Kishore Padmanabha,
Shahaji Bhosle, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=3a09e053c9262d2332217835ca2f8deb5c58df9c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 3a09e053c9262d2332217835ca2f8deb5c58df9c Mon Sep 17 00:00:00 2001
From: Farah Smith <farah.smith@broadcom.com>
Date: Thu, 7 Nov 2024 19:22:11 +0530
Subject: [PATCH] net/bnxt/tf_core: fix Thor TF EM key size check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 912abed4250c792214886880fa0b93b7712fba21 ]
The maximum EM key size is 640 bits for Thor. But the lookup record
+ the key size is 679 bits. This value must be rounded up to a 128-bit
aligned number. So the size check should be 96 bytes rather than 80.
This fix allows keys > 601 bits to be successfully inserted.
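A quick, hedged sanity check of that arithmetic (editorial note; the helper
below is illustrative, not part of the driver):

/* 679 bits (lookup record + key) rounded up to a 128-bit boundary:
 * ceil(679 / 128) = 6 -> 6 * 128 = 768 bits -> 768 / 8 = 96 bytes. */
static inline unsigned int
em_record_bytes(unsigned int record_bits)
{
	unsigned int aligned_bits = ((record_bits + 127) / 128) * 128;

	return aligned_bits / 8;	/* em_record_bytes(679) == 96 */
}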
Fixes: 539931eab3a5 ("net/bnxt: support EM with FKB")
Signed-off-by: Farah Smith <farah.smith@broadcom.com>
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Farah Smith <farah.smith@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/tf_core/tf_msg.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c
index 4aa90f6b07..46e9d4187a 100644
--- a/drivers/net/bnxt/tf_core/tf_msg.c
+++ b/drivers/net/bnxt/tf_core/tf_msg.c
@@ -25,7 +25,7 @@
*/
#define TF_MSG_SET_GLOBAL_CFG_DATA_SIZE 16
#define TF_MSG_EM_INSERT_KEY_SIZE 64
-#define TF_MSG_EM_INSERT_RECORD_SIZE 80
+#define TF_MSG_EM_INSERT_RECORD_SIZE 96
#define TF_MSG_TBL_TYPE_SET_DATA_SIZE 88
/* Compile check - Catch any msg changes that we depend on, like the
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.852584306 +0800
+++ 0053-net-bnxt-tf_core-fix-Thor-TF-EM-key-size-check.patch 2024-12-06 23:26:43.963044828 +0800
@@ -1 +1 @@
-From 912abed4250c792214886880fa0b93b7712fba21 Mon Sep 17 00:00:00 2001
+From 3a09e053c9262d2332217835ca2f8deb5c58df9c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 912abed4250c792214886880fa0b93b7712fba21 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index 08e9783d52..dd5ea1c80e 100644
+index 4aa90f6b07..46e9d4187a 100644
* patch 'net/bnxt/tf_core: fix slice count in case of HA entry move' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (52 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix Thor TF EM key size check' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix reading SFF-8436 SFP EEPROMs' " Xueming Li
` (42 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Sangtani Parag Satishbhai
Cc: Xueming Li, Sriharsha Basavapatna, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c1606948da25d750aca4e3879c041fefa71286ef
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c1606948da25d750aca4e3879c041fefa71286ef Mon Sep 17 00:00:00 2001
From: Sangtani Parag Satishbhai <parag-satishbhai.sangtani@broadcom.com>
Date: Thu, 7 Nov 2024 19:22:14 +0530
Subject: [PATCH] net/bnxt/tf_core: fix slice count in case of HA entry move
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1190f2f8d5abf82c843ad071ad4c7d0aea202cce ]
When entries are moved during HA, a shared move function transfers
TCAM entries by using get/set message APIs, and the slice number of the
entry is required to accomplish the movement. The slice number is
calculated as the product of row_slice and entry size. Before calling
the get/set message APIs, the destination entry size should be updated with
the source entry size; otherwise, it might corrupt the slice number field,
which may result in writing an incorrect entry. A fix is made which now
copies the entry size from the source to the destination before calling
the get/set message APIs, ensuring the correct slice number is modified.
Fixes: 97435d7906d7 ("net/bnxt: update Truflow core")
Signed-off-by: Sangtani Parag Satishbhai <parag-satishbhai.sangtani@broadcom.com>
Reviewed-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
.mailmap | 1 +
drivers/net/bnxt/tf_core/cfa_tcam_mgr.c | 6 +++++-
2 files changed, 6 insertions(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 879d0fd81c..67ff5b21c6 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1264,6 +1264,7 @@ Sampath Peechu <speechu@cisco.com>
Samuel Gauthier <samuel.gauthier@6wind.com>
Sandilya Bhagi <sbhagi@solarflare.com>
Sangjin Han <sangjin@eecs.berkeley.edu>
+Sangtani Parag Satishbhai <parag-satishbhai.sangtani@broadcom.com>
Sankar Chokkalingam <sankarx.chokkalingam@intel.com>
Santoshkumar Karanappa Rastapur <santosh.rastapur@broadcom.com>
Santosh Shukla <santosh.shukla@caviumnetworks.com> <sshukla@mvista.com>
diff --git a/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c b/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
index 9df2d2b937..130985f92a 100644
--- a/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
+++ b/drivers/net/bnxt/tf_core/cfa_tcam_mgr.c
@@ -1678,6 +1678,11 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
uint8_t key[CFA_TCAM_MGR_MAX_KEY_SIZE];
uint8_t mask[CFA_TCAM_MGR_MAX_KEY_SIZE];
uint8_t result[CFA_TCAM_MGR_MAX_KEY_SIZE];
+ /*
+ * Copy entry size before moving else if
+ * slice number is non zero and entry size is zero it will cause issues
+ */
+ dst_row->entry_size = src_row->entry_size;
int rc;
@@ -1752,7 +1757,6 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
ROW_ENTRY_SET(dst_row, dst_row_slice);
dst_row->entries[dst_row_slice] = entry_id;
- dst_row->entry_size = src_row->entry_size;
dst_row->priority = src_row->priority;
ROW_ENTRY_CLEAR(src_row, entry->slice);
entry->row = dst_row_index;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.882001506 +0800
+++ 0054-net-bnxt-tf_core-fix-slice-count-in-case-of-HA-entry.patch 2024-12-06 23:26:43.963044828 +0800
@@ -1 +1 @@
-From 1190f2f8d5abf82c843ad071ad4c7d0aea202cce Mon Sep 17 00:00:00 2001
+From c1606948da25d750aca4e3879c041fefa71286ef Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1190f2f8d5abf82c843ad071ad4c7d0aea202cce ]
@@ -17 +19,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 4dbb8cf8a7..f293f89167 100644
+index 879d0fd81c..67ff5b21c6 100644
@@ -31 +33 @@
-@@ -1326,6 +1326,7 @@ Sampath Peechu <speechu@cisco.com>
+@@ -1264,6 +1264,7 @@ Sampath Peechu <speechu@cisco.com>
@@ -40 +42 @@
-index 349f52caba..33b1e4121e 100644
+index 9df2d2b937..130985f92a 100644
@@ -43 +45 @@
-@@ -1717,6 +1717,11 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
+@@ -1678,6 +1678,11 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
@@ -55 +57 @@
-@@ -1791,7 +1796,6 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
+@@ -1752,7 +1757,6 @@ cfa_tcam_mgr_shared_entry_move(int sess_idx, struct cfa_tcam_mgr_context *contex
* patch 'net/bnxt: fix reading SFF-8436 SFP EEPROMs' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (53 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix slice count in case of HA entry move' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix TCP and UDP checksum flags' " Xueming Li
` (41 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Peter Morrow; +Cc: Xueming Li, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=09537001bacc6acc7586e85df022e49dc172d358
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 09537001bacc6acc7586e85df022e49dc172d358 Mon Sep 17 00:00:00 2001
From: Peter Morrow <peter@graphiant.com>
Date: Mon, 12 Aug 2024 11:34:05 +0100
Subject: [PATCH] net/bnxt: fix reading SFF-8436 SFP EEPROMs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 7b8400464f14637ed2669dbf732c256bf2447de6 ]
If an SFP which supports SFF-8436 is present, then
currently the DDM information present in the EEPROM
is not read. Furthermore, bnxt_get_module_eeprom()
will return -EINVAL for these EEPROMs since their
length is 512 bytes but we only ever select
2 pages (256 bytes) to read.
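For context, a hedged application-side sketch (not from the patch) of the
ethdev path that ends up in bnxt_get_module_eeprom(); the helper name is
illustrative and error handling is minimal:

#include <string.h>
#include <rte_ethdev.h>

/* Read the full module EEPROM map; for SFF-8436 modules eeprom_len is
 * 512 bytes, which is why reading only 2 x 256-byte pages was failing. */
static int
read_module_eeprom(uint16_t port_id, uint8_t *buf, uint32_t buf_len)
{
	struct rte_eth_dev_module_info minfo;
	struct rte_dev_eeprom_info einfo;
	int ret;

	ret = rte_eth_dev_get_module_info(port_id, &minfo);
	if (ret != 0)
		return ret;

	memset(&einfo, 0, sizeof(einfo));
	einfo.offset = 0;
	einfo.length = RTE_MIN(minfo.eeprom_len, buf_len);
	einfo.data = buf;

	return rte_eth_dev_get_module_eeprom(port_id, &einfo);
}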
Fixes: 6253a23491a4 ("net/bnxt: dump SFP module info")
Signed-off-by: Peter Morrow <peter@graphiant.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
.mailmap | 1 +
drivers/net/bnxt/bnxt_ethdev.c | 1 -
2 files changed, 1 insertion(+), 1 deletion(-)
diff --git a/.mailmap b/.mailmap
index 67ff5b21c6..ff5b0821ba 100644
--- a/.mailmap
+++ b/.mailmap
@@ -1123,6 +1123,7 @@ Peng Yu <penyu@amazon.com>
Peng Zhang <peng.zhang@corigine.com> <peng1x.zhang@intel.com>
Pengzhen Liu <liupengzhen3@huawei.com>
Peter Mccarthy <peter.mccarthy@intel.com>
+Peter Morrow <peter@graphiant.com>
Peter Nilsson <peter.j.nilsson@ericsson.com>
Peter Spreadborough <peter.spreadborough@broadcom.com>
Petr Houska <t-pehous@microsoft.com>
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 0fc561d258..988895a065 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4006,7 +4006,6 @@ static int bnxt_get_module_eeprom(struct rte_eth_dev *dev,
switch (module_info[0]) {
case SFF_MODULE_ID_SFP:
- module_info[SFF_DIAG_SUPPORT_OFFSET] = 0;
if (module_info[SFF_DIAG_SUPPORT_OFFSET]) {
pg_addr[2] = I2C_DEV_ADDR_A2;
pg_addr[3] = I2C_DEV_ADDR_A2;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.917753406 +0800
+++ 0055-net-bnxt-fix-reading-SFF-8436-SFP-EEPROMs.patch 2024-12-06 23:26:43.963044828 +0800
@@ -1 +1 @@
-From 7b8400464f14637ed2669dbf732c256bf2447de6 Mon Sep 17 00:00:00 2001
+From 09537001bacc6acc7586e85df022e49dc172d358 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 7b8400464f14637ed2669dbf732c256bf2447de6 ]
@@ -14 +16,0 @@
-Cc: stable@dpdk.org
@@ -24 +26 @@
-index f293f89167..5a8ec89d47 100644
+index 67ff5b21c6..ff5b0821ba 100644
@@ -27 +29 @@
-@@ -1179,6 +1179,7 @@ Peng Yu <penyu@amazon.com>
+@@ -1123,6 +1123,7 @@ Peng Yu <penyu@amazon.com>
@@ -36 +38 @@
-index 2f5c055086..5edb162430 100644
+index 0fc561d258..988895a065 100644
@@ -39 +41 @@
-@@ -4222,7 +4222,6 @@ static int bnxt_get_module_eeprom(struct rte_eth_dev *dev,
+@@ -4006,7 +4006,6 @@ static int bnxt_get_module_eeprom(struct rte_eth_dev *dev,
* patch 'net/bnxt: fix TCP and UDP checksum flags' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (54 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt: fix reading SFF-8436 SFP EEPROMs' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix bad action offset in Tx BD' " Xueming Li
` (40 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Ajit Khaparde; +Cc: Xueming Li, Kalesh AP, Damodharam Ammepalli, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e8a76c3869c12908664c09e47a5963c5af24b7df
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e8a76c3869c12908664c09e47a5963c5af24b7df Mon Sep 17 00:00:00 2001
From: Ajit Khaparde <ajit.khaparde@broadcom.com>
Date: Thu, 13 Jun 2024 07:20:28 -0700
Subject: [PATCH] net/bnxt: fix TCP and UDP checksum flags
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4c0451197e5a88531c30398b58b7e5601be90080 ]
Set TCP and UDP checksum flags explicitly for LSO-capable packets.
In some older chip variants, this will enable the hardware to compute
the checksum correctly for tunnel and non-tunnel packets.
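For context, a hedged application-side sketch (not part of the patch) of the
mbuf metadata that steers a packet into this LSO path; the helper name and
MSS value are illustrative:

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_ip.h>
#include <rte_tcp.h>

/* Mark an already-built TCP/IPv4 packet for TSO; the PMD then emits the
 * long Tx BD whose lflags this fix adjusts. */
static void
request_tso(struct rte_mbuf *m)
{
	m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_IPV4 |
		       RTE_MBUF_F_TX_IP_CKSUM | RTE_MBUF_F_TX_TCP_CKSUM;
	m->l2_len = sizeof(struct rte_ether_hdr);
	m->l3_len = sizeof(struct rte_ipv4_hdr);
	m->l4_len = sizeof(struct rte_tcp_hdr);
	m->tso_segsz = 1448;	/* MSS, illustrative */
}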
Fixes: 1d76c878b21d ("net/bnxt: support updating IPID")
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
---
drivers/net/bnxt/bnxt_txr.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index cef14427a8..67e8a9990b 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -313,7 +313,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
/* TSO */
txbd1->lflags |= TX_BD_LONG_LFLAGS_LSO |
- TX_BD_LONG_LFLAGS_T_IPID;
+ TX_BD_LONG_LFLAGS_T_IPID |
+ TX_BD_LONG_LFLAGS_TCP_UDP_CHKSUM |
+ TX_BD_LONG_LFLAGS_T_IP_CHKSUM;
hdr_size = tx_pkt->l2_len + tx_pkt->l3_len +
tx_pkt->l4_len;
hdr_size += (tx_pkt->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) ?
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.950222905 +0800
+++ 0056-net-bnxt-fix-TCP-and-UDP-checksum-flags.patch 2024-12-06 23:26:43.963044828 +0800
@@ -1 +1 @@
-From 4c0451197e5a88531c30398b58b7e5601be90080 Mon Sep 17 00:00:00 2001
+From e8a76c3869c12908664c09e47a5963c5af24b7df Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4c0451197e5a88531c30398b58b7e5601be90080 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 12e4faa8fa..38f858f27f 100644
+index cef14427a8..67e8a9990b 100644
@@ -24 +26 @@
-@@ -319,7 +319,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
+@@ -313,7 +313,9 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
* patch 'net/bnxt: fix bad action offset in Tx BD' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (55 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt: fix TCP and UDP checksum flags' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: remove dead conditional' " Xueming Li
` (39 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Peter Spreadborough
Cc: Xueming Li, Kishore Padmanabha, Ajit Khaparde, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1cc98f6f93302133b9b735a9f9b09267f9fb659a
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1cc98f6f93302133b9b735a9f9b09267f9fb659a Mon Sep 17 00:00:00 2001
From: Peter Spreadborough <peter.spreadborough@broadcom.com>
Date: Tue, 16 Apr 2024 14:15:56 -0400
Subject: [PATCH] net/bnxt: fix bad action offset in Tx BD
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b019ddf9b1de65491b4c07c25bbab3dc70c15f79 ]
This change ensures that the high part of an action table entry
offset stored in the Tx BD is set correctly. A bad value will
cause the PDCU to abort a fetch and may stall the pipeline.
Fixes: 527b10089cc5 ("net/bnxt: optimize Tx completion handling")
Signed-off-by: Peter Spreadborough <peter.spreadborough@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
---
drivers/net/bnxt/bnxt_txr.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/drivers/net/bnxt/bnxt_txr.c b/drivers/net/bnxt/bnxt_txr.c
index 67e8a9990b..6500738ff2 100644
--- a/drivers/net/bnxt/bnxt_txr.c
+++ b/drivers/net/bnxt/bnxt_txr.c
@@ -303,10 +303,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
*/
txbd1->kid_or_ts_high_mss = 0;
- if (txq->vfr_tx_cfa_action)
- txbd1->cfa_action = txq->vfr_tx_cfa_action;
- else
- txbd1->cfa_action = txq->bp->tx_cfa_action;
+ if (txq->vfr_tx_cfa_action) {
+ txbd1->cfa_action = txq->vfr_tx_cfa_action & 0xffff;
+ txbd1->cfa_action_high = (txq->vfr_tx_cfa_action >> 16) &
+ TX_BD_LONG_CFA_ACTION_HIGH_MASK;
+ } else {
+ txbd1->cfa_action = txq->bp->tx_cfa_action & 0xffff;
+ txbd1->cfa_action_high = (txq->bp->tx_cfa_action >> 16) &
+ TX_BD_LONG_CFA_ACTION_HIGH_MASK;
+ }
if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
uint16_t hdr_size;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:45.976852005 +0800
+++ 0057-net-bnxt-fix-bad-action-offset-in-Tx-BD.patch 2024-12-06 23:26:43.973044828 +0800
@@ -1 +1 @@
-From b019ddf9b1de65491b4c07c25bbab3dc70c15f79 Mon Sep 17 00:00:00 2001
+From 1cc98f6f93302133b9b735a9f9b09267f9fb659a Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b019ddf9b1de65491b4c07c25bbab3dc70c15f79 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 38f858f27f..c82b11e733 100644
+index 67e8a9990b..6500738ff2 100644
@@ -24 +26 @@
-@@ -308,10 +308,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
+@@ -303,10 +303,15 @@ static uint16_t bnxt_start_xmit(struct rte_mbuf *tx_pkt,
@@ -42,2 +44,2 @@
- if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG ||
- tx_pkt->ol_flags & RTE_MBUF_F_TX_UDP_SEG) {
+ if (tx_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+ uint16_t hdr_size;
* patch 'net/bnx2x: remove dead conditional' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (56 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnxt: fix bad action offset in Tx BD' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix always true expression' " Xueming Li
` (38 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9643a94a10f8243dc268b83311648e52b92c8900
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9643a94a10f8243dc268b83311648e52b92c8900 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 12 Nov 2024 09:43:53 -0800
Subject: [PATCH] net/bnx2x: remove dead conditional
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3868c0ce5ce83eacc9611cc4a83d20120ae3442e ]
The second if test here is impossible because it contradicts the
previous line.
Coverity issue: 384428
Fixes: 540a211084a7 ("bnx2x: driver core")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/bnx2x/bnx2x_stats.c | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c
index 69132c7c80..72a26ed5cc 100644
--- a/drivers/net/bnx2x/bnx2x_stats.c
+++ b/drivers/net/bnx2x/bnx2x_stats.c
@@ -75,10 +75,6 @@ bnx2x_storm_stats_post(struct bnx2x_softc *sc)
int rc;
if (!sc->stats_pending) {
- if (sc->stats_pending) {
- return;
- }
-
sc->fw_stats_req->hdr.drv_stats_counter =
htole16(sc->stats_counter++);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.001370505 +0800
+++ 0058-net-bnx2x-remove-dead-conditional.patch 2024-12-06 23:26:43.973044828 +0800
@@ -1 +1 @@
-From 3868c0ce5ce83eacc9611cc4a83d20120ae3442e Mon Sep 17 00:00:00 2001
+From 9643a94a10f8243dc268b83311648e52b92c8900 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3868c0ce5ce83eacc9611cc4a83d20120ae3442e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -15,2 +17,2 @@
- drivers/net/bnx2x/bnx2x_stats.c | 3 ---
- 1 file changed, 3 deletions(-)
+ drivers/net/bnx2x/bnx2x_stats.c | 4 ----
+ 1 file changed, 4 deletions(-)
@@ -19 +21 @@
-index d473c5e7ec..8adbe7e381 100644
+index 69132c7c80..72a26ed5cc 100644
@@ -22 +24 @@
-@@ -73,9 +73,6 @@ bnx2x_storm_stats_post(struct bnx2x_softc *sc)
+@@ -75,10 +75,6 @@ bnx2x_storm_stats_post(struct bnx2x_softc *sc)
@@ -26 +28 @@
-- if (sc->stats_pending)
+- if (sc->stats_pending) {
@@ -27,0 +30 @@
+- }
* patch 'net/bnx2x: fix always true expression' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (57 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnx2x: remove dead conditional' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix possible infinite loop at startup' " Xueming Li
` (37 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=12ae6d06a38b50392c85c7ccb28278e61f9f4116
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 12ae6d06a38b50392c85c7ccb28278e61f9f4116 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 12 Nov 2024 09:43:54 -0800
Subject: [PATCH] net/bnx2x: fix always true expression
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit fb6b0e9a36326a4f13f496b00f7f92aaffe1d5f4 ]
Coverity spotted that the check to enable single interrupt
mode would evaluate as always true since:
The OR condition sc->interrupt_mode != 2 || sc->interrupt_mode != 3
will always be true because sc->interrupt_mode cannot be equal to
two different values at the same time, so it is always unequal to
at least one of them.
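As a standalone illustration (the values 2 and 3 stand in for the
MSIX/SINGLE_MSIX mode constants), the condition holds for every value:

#include <stdio.h>

int main(void)
{
        int mode;

        for (mode = 0; mode < 4; mode++) {
                /* mode cannot equal both 2 and 3 at once, so the OR of the
                 * two inequalities is true for every mode. */
                if (mode != 2 || mode != 3)
                        printf("mode %d: branch taken\n", mode);
        }
        return 0;
}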
Coverity issue: 362046
Fixes: 540a211084a7 ("bnx2x: driver core")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/bnx2x/bnx2x.c | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 3153cc4d80..af31ac4604 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -11189,11 +11189,9 @@ static int bnx2x_init_hw_func(struct bnx2x_softc *sc)
/* Turn on a single ISR mode in IGU if driver is going to use
* INT#x or MSI
*/
- if ((sc->interrupt_mode != INTR_MODE_MSIX)
- || (sc->interrupt_mode != INTR_MODE_SINGLE_MSIX)) {
+ if (sc->interrupt_mode == INTR_MODE_INTX ||
+ sc->interrupt_mode == INTR_MODE_MSI)
pf_conf |= IGU_PF_CONF_SINGLE_ISR_EN;
- }
-
/*
* Timers workaround bug: function init part.
* Need to wait 20msec after initializing ILT,
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.026329004 +0800
+++ 0059-net-bnx2x-fix-always-true-expression.patch 2024-12-06 23:26:43.973044828 +0800
@@ -1 +1 @@
-From fb6b0e9a36326a4f13f496b00f7f92aaffe1d5f4 Mon Sep 17 00:00:00 2001
+From 12ae6d06a38b50392c85c7ccb28278e61f9f4116 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit fb6b0e9a36326a4f13f496b00f7f92aaffe1d5f4 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/bnx2x: fix possible infinite loop at startup' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (58 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnx2x: fix always true expression' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix duplicate branch' " Xueming Li
` (36 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7315de38074fb4a9087a3d3696b1318f68fcc1c1
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7315de38074fb4a9087a3d3696b1318f68fcc1c1 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 12 Nov 2024 09:43:55 -0800
Subject: [PATCH] net/bnx2x: fix possible infinite loop at startup
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a47272b052dd1c8c571a1c0b89b56aaa3ebf4351 ]
Coverity spotted that one of the loop conditions was always true.
Fix by initializing the variable using the same logic as the Linux
kernel driver.
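The general shape of the fix, as a generic sketch (this is not the
driver's actual polling loop; only the 1000/400/0 budget mirrors the
emulation/FPGA values added in the patch below):

#include <stdio.h>
#include <unistd.h>

static int init_done_reg(void)      /* stand-in for the hardware register read */
{
        static int polls;
        return ++polls >= 3;
}

int main(void)
{
        int is_emul = 0, is_fpga = 0;
        int factor = is_emul ? 1000 : (is_fpga ? 400 : 0);
        int budget = 10 * (factor + 1); /* illustrative bound, never zero */
        int done = 0;

        do {
                usleep(1000);
                done = init_done_reg();
        } while (!done && --budget > 0);

        printf(done ? "init done\n" : "timed out\n");
        return 0;
}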
Coverity issue: 362057
Fixes: 540a211084a7 ("bnx2x: driver core")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/bnx2x/bnx2x.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index af31ac4604..d96fcb55c9 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -10331,12 +10331,13 @@ static int bnx2x_init_hw_common(struct bnx2x_softc *sc)
REG_WR(sc, PXP2_REG_RD_DISABLE_INPUTS, 0);
if (!CHIP_IS_E1x(sc)) {
- int factor = 0;
+ int factor = CHIP_REV_IS_EMUL(sc) ? 1000 :
+ (CHIP_REV_IS_FPGA(sc) ? 400 : 0);
ecore_init_block(sc, BLOCK_PGLUE_B, PHASE_COMMON);
ecore_init_block(sc, BLOCK_ATC, PHASE_COMMON);
-/* let the HW do it's magic... */
+ /* let the HW do it's magic... */
do {
DELAY(200000);
val = REG_RD(sc, ATC_REG_ATC_INIT_DONE);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.058237404 +0800
+++ 0060-net-bnx2x-fix-possible-infinite-loop-at-startup.patch 2024-12-06 23:26:43.973044828 +0800
@@ -1 +1 @@
-From a47272b052dd1c8c571a1c0b89b56aaa3ebf4351 Mon Sep 17 00:00:00 2001
+From 7315de38074fb4a9087a3d3696b1318f68fcc1c1 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a47272b052dd1c8c571a1c0b89b56aaa3ebf4351 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/bnx2x: fix duplicate branch' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (59 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnx2x: fix possible infinite loop at startup' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'common/cnxk: fix build on Ubuntu 24.04' " Xueming Li
` (35 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=17adb7d627eb5096c5396625cfe6ba258f747893
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 17adb7d627eb5096c5396625cfe6ba258f747893 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 12 Nov 2024 09:43:56 -0800
Subject: [PATCH] net/bnx2x: fix duplicate branch
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 87e210eb086f49f32733c579003b9565e46535d7 ]
Coverity spotted that both legs of the conditional are the same.
Looking at the kernel driver, there is additional code there, but the
kernel driver supports Wake on LAN and DPDK does not.
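Reduced to a standalone sketch (the constant value is made up, not the
firmware code), the duplicated branch collapses to a single assignment:

#include <stdio.h>

#define UNLOAD_REQ_WOL_DIS 0x10u    /* illustrative value */

int main(void)
{
        int unload_mode = 0;
        unsigned int reset_code;

        /* Before: both legs of "if (unload_mode == UNLOAD_NORMAL)" assigned
         * the same value, so the conditional carried no information. */
        reset_code = UNLOAD_REQ_WOL_DIS;
        (void)unload_mode;
        printf("reset_code = 0x%x\n", reset_code);
        return 0;
}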
Coverity issue: 362072
Fixes: 540a211084a7 ("bnx2x: driver core")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
drivers/net/bnx2x/bnx2x.c | 8 ++------
1 file changed, 2 insertions(+), 6 deletions(-)
diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index d96fcb55c9..51e5cabf7b 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -1623,16 +1623,12 @@ static int bnx2x_nic_unload_no_mcp(struct bnx2x_softc *sc)
}
/* request unload mode from the MCP: COMMON, PORT or FUNCTION */
-static uint32_t bnx2x_send_unload_req(struct bnx2x_softc *sc, int unload_mode)
+static uint32_t bnx2x_send_unload_req(struct bnx2x_softc *sc, int unload_mode __rte_unused)
{
uint32_t reset_code = 0;
/* Select the UNLOAD request mode */
- if (unload_mode == UNLOAD_NORMAL) {
- reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS;
- } else {
- reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS;
- }
+ reset_code = DRV_MSG_CODE_UNLOAD_REQ_WOL_DIS;
/* Send the request to the MCP */
if (!BNX2X_NOMCP(sc)) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.097480404 +0800
+++ 0061-net-bnx2x-fix-duplicate-branch.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From 87e210eb086f49f32733c579003b9565e46535d7 Mon Sep 17 00:00:00 2001
+From 17adb7d627eb5096c5396625cfe6ba258f747893 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 87e210eb086f49f32733c579003b9565e46535d7 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (60 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/bnx2x: fix duplicate branch' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/cnxk: " Xueming Li
` (34 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=5506c320807e3b89e8120100590d8f1c5a3eef93
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 5506c320807e3b89e8120100590d8f1c5a3eef93 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Thu, 14 Nov 2024 13:08:14 +0530
Subject: [PATCH] common/cnxk: fix build on Ubuntu 24.04
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 20c29a0e4602b9c7be5ea299457f909846c3785d ]
Due to differing data types, warnings are thrown about writes that
exceed the destination size.
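A reduced sketch of the change (the helper and sizes are stand-ins; only
the cast mirrors the MSIX_IRQ_SET_BUF_LEN fix below):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

static int max_intr_get(void)       /* the real helper returns a signed int */
{
        return 8;
}

int main(void)
{
        /* Casting the signed count to uint32_t keeps the whole length
         * expression unsigned, as the patch below does. */
        size_t len = sizeof(uint64_t) + sizeof(int) * (uint32_t)max_intr_get();
        char buf[256];

        if (len <= sizeof(buf))
                memset(buf, 0, len);
        printf("irq set buffer length = %zu\n", len);
        return 0;
}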
Bugzilla ID: 1513
Fixes: 39ac394aa7a8 ("common/cnxk: fix device MSI-X greater than default value")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/common/cnxk/roc_irq.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/common/cnxk/roc_irq.c b/drivers/common/cnxk/roc_irq.c
index a709c4047d..0b21b9e2d9 100644
--- a/drivers/common/cnxk/roc_irq.c
+++ b/drivers/common/cnxk/roc_irq.c
@@ -15,7 +15,7 @@
#define MSIX_IRQ_SET_BUF_LEN \
(sizeof(struct vfio_irq_set) + sizeof(int) * \
- (plt_intr_max_intr_get(intr_handle)))
+ ((uint32_t)plt_intr_max_intr_get(intr_handle)))
static int
irq_get_info(struct plt_intr_handle *intr_handle)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.132733103 +0800
+++ 0062-common-cnxk-fix-build-on-Ubuntu-24.04.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From 20c29a0e4602b9c7be5ea299457f909846c3785d Mon Sep 17 00:00:00 2001
+From 5506c320807e3b89e8120100590d8f1c5a3eef93 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 20c29a0e4602b9c7be5ea299457f909846c3785d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (61 preceding siblings ...)
2024-12-07 8:00 ` patch 'common/cnxk: fix build on Ubuntu 24.04' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-09 5:42 ` [EXTERNAL] " Sunil Kumar Kori
2024-12-09 5:42 ` Sunil Kumar Kori
2024-12-07 8:00 ` patch 'examples/l2fwd-event: fix spinlock handling' " Xueming Li
` (33 subsequent siblings)
96 siblings, 2 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Sunil Kumar Kori; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d39e6e67897cafd530b554985801a9f8f7092012
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Thu, 14 Nov 2024 13:08:16 +0530
Subject: [PATCH] net/cnxk: fix build on Ubuntu 24.04
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
Due to implicit unsigned to signed integer conversion, the actual value
gets wrapped and becomes higher than its size.
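A minimal sketch of the narrowing problem (the real devargs parsing is
more involved; only the widened type mirrors the patch below):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint16_t parsed = 256;       /* value handed back by a parser */
        uint8_t narrow = parsed;     /* silently wraps to 0 */
        uint16_t wide = parsed;      /* keeps the intended value */

        printf("narrow = %u, wide = %u\n", (unsigned)narrow, (unsigned)wide);
        return 0;
}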
Bugzilla ID: 1513
Fixes: 03b152389fb1 ("net/cnxk: add option to enable custom inbound SA")
Fixes: 7df4ead35436 ("net/cnxk: support parsing custom SA action")
Fixes: 47cca253d605 ("net/cnxk: support Rx inject")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index a0e9300cff..8c022e5f08 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
uint16_t custom_sa_act = 0;
struct rte_kvargs *kvlist;
uint32_t meta_buf_sz = 0;
+ uint16_t lock_rx_ctx = 0;
uint16_t no_inl_dev = 0;
- uint8_t lock_rx_ctx = 0;
memset(&sdp_chan, 0, sizeof(sdp_chan));
memset(&pre_l2_info, 0, sizeof(struct flow_pre_l2_size_info));
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.164810003 +0800
+++ 0063-net-cnxk-fix-build-on-Ubuntu-24.04.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From b9799fb5e7a38c824c91b88d3c89250d23c783e6 Mon Sep 17 00:00:00 2001
+From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
+ drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -21 +23 @@
-index 5bd50bb9a1..ecc2ea8b77 100644
+index a0e9300cff..8c022e5f08 100644
@@ -24,3 +26 @@
-@@ -305,12 +305,12 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
- uint16_t scalar_enable = 0;
- uint16_t tx_compl_ena = 0;
+@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
@@ -28,2 +27,0 @@
-- uint8_t custom_inb_sa = 0;
-+ uint16_t custom_inb_sa = 0;
@@ -33 +30,0 @@
-+ uint16_t rx_inj_ena = 0;
@@ -36 +32,0 @@
-- uint8_t rx_inj_ena = 0;
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'examples/l2fwd-event: fix spinlock handling' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (62 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/cnxk: " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'eventdev: fix possible array underflow/overflow' " Xueming Li
` (32 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=da2874dd1fa992acaa1d3deb9b4f5c043e22640b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From da2874dd1fa992acaa1d3deb9b4f5c043e22640b Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Thu, 14 Nov 2024 13:14:36 +0530
Subject: [PATCH] examples/l2fwd-event: fix spinlock handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1f41deac447d7938198a2acdd1b7862161feef91 ]
Detected by pvs-studio
Bug 89-93: very suspicious synchronization
The analyzer issued a pack of V1020 warnings that a resource
might remain blocked.
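The shape of the fix as a standalone sketch (assuming DPDK's
rte_spinlock API; the function body itself is illustrative):

#include <rte_spinlock.h>

static rte_spinlock_t evp_lock = RTE_SPINLOCK_INITIALIZER;

int get_free_event_port(int index, int nb_ports)
{
        rte_spinlock_lock(&evp_lock);
        if (index >= nb_ports) {
                /* The early-return path must release the lock as well. */
                rte_spinlock_unlock(&evp_lock);
                return -1;
        }
        index++;
        rte_spinlock_unlock(&evp_lock);
        return index - 1;
}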
Fixes: 080f57bceca4 ("examples/l2fwd-event: add eventdev main loop")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
examples/l2fwd-event/l2fwd_event.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/examples/l2fwd-event/l2fwd_event.c b/examples/l2fwd-event/l2fwd_event.c
index 4b5a032e35..78f10f31ad 100644
--- a/examples/l2fwd-event/l2fwd_event.c
+++ b/examples/l2fwd-event/l2fwd_event.c
@@ -141,6 +141,7 @@ l2fwd_get_free_event_port(struct l2fwd_event_resources *evt_rsrc)
rte_spinlock_lock(&evt_rsrc->evp.lock);
if (index >= evt_rsrc->evp.nb_ports) {
printf("No free event port is available\n");
+ rte_spinlock_unlock(&evt_rsrc->evp.lock);
return -1;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.199157602 +0800
+++ 0064-examples-l2fwd-event-fix-spinlock-handling.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From 1f41deac447d7938198a2acdd1b7862161feef91 Mon Sep 17 00:00:00 2001
+From da2874dd1fa992acaa1d3deb9b4f5c043e22640b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1f41deac447d7938198a2acdd1b7862161feef91 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 22472027b9..416957384b 100644
+index 4b5a032e35..78f10f31ad 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'eventdev: fix possible array underflow/overflow' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (63 preceding siblings ...)
2024-12-07 8:00 ` patch 'examples/l2fwd-event: fix spinlock handling' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/dpaa2: remove unnecessary check for null before free' " Xueming Li
` (31 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bruce Richardson; +Cc: Xueming Li, Jerin Jacob, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=717465a84d31115f76a9ca33fb09d0bcceff6255
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 717465a84d31115f76a9ca33fb09d0bcceff6255 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Thu, 14 Nov 2024 11:55:38 +0000
Subject: [PATCH] eventdev: fix possible array underflow/overflow
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 952b24bd0475450e548d4aafae7d8cf48258402b ]
If the number of interrupts is zero, then indexing an array by
"nb_rx_intr - 1" will cause an out-of-bounds write. Fix this by putting
in a check that nb_rx_intr > 0 before doing the array write.
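A reduced sketch of the guard (array size and values are illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        uint32_t intr_queue[8];
        uint32_t nb_rx_intr = 0;     /* zero interrupt queues is a valid state */
        uint32_t rx_queue_id = 5;

        /* With nb_rx_intr == 0, intr_queue[nb_rx_intr - 1] would index
         * intr_queue[UINT32_MAX]; check before touching the array. */
        if (nb_rx_intr > 0)
                intr_queue[nb_rx_intr - 1] = rx_queue_id;
        else
                printf("no interrupt queue to record\n");
        return 0;
}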
Coverity issue: 448870
Fixes: 3810ae435783 ("eventdev: add interrupt driven queues to Rx adapter")
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
lib/eventdev/rte_event_eth_rx_adapter.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/eventdev/rte_event_eth_rx_adapter.c b/lib/eventdev/rte_event_eth_rx_adapter.c
index 1b83a55b5c..bdcc3e3539 100644
--- a/lib/eventdev/rte_event_eth_rx_adapter.c
+++ b/lib/eventdev/rte_event_eth_rx_adapter.c
@@ -2299,7 +2299,7 @@ rxa_sw_add(struct event_eth_rx_adapter *rx_adapter, uint16_t eth_dev_id,
for (i = 0; i < dev_info->dev->data->nb_rx_queues; i++)
dev_info->intr_queue[i] = i;
} else {
- if (!rxa_intr_queue(dev_info, rx_queue_id))
+ if (!rxa_intr_queue(dev_info, rx_queue_id) && nb_rx_intr > 0)
dev_info->intr_queue[nb_rx_intr - 1] =
rx_queue_id;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.230994302 +0800
+++ 0065-eventdev-fix-possible-array-underflow-overflow.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From 952b24bd0475450e548d4aafae7d8cf48258402b Mon Sep 17 00:00:00 2001
+From 717465a84d31115f76a9ca33fb09d0bcceff6255 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 952b24bd0475450e548d4aafae7d8cf48258402b ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 3ee20d95f3..39674c4604 100644
+index 1b83a55b5c..bdcc3e3539 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/dpaa2: remove unnecessary check for null before free' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (64 preceding siblings ...)
2024-12-07 8:00 ` patch 'eventdev: fix possible array underflow/overflow' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'common/mlx5: fix error CQE handling for 128 bytes CQE' " Xueming Li
` (30 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ea2df1d74f8adc168f548f8bf1f4dab38681168c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ea2df1d74f8adc168f548f8bf1f4dab38681168c Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Tue, 12 Nov 2024 09:38:02 -0800
Subject: [PATCH] net/dpaa2: remove unnecessary check for null before free
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit e6bf3256b95c77ee4d0b2874e1896d01c41c2d7c ]
Calling rte_free() with a NULL parameter is allowed.
Found by nullfree.cocci
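For reference, a minimal sketch of the simplification (rte_free(), like
free(), is a no-op on NULL; the function name here is illustrative):

#include <rte_malloc.h>

void release_extract_param(void *param)
{
        /* No "if (param)" guard needed: rte_free(NULL) does nothing. */
        rte_free(param);
}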
Fixes: 5964d36a2904 ("net/dpaa2: release port upon close")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
drivers/net/dpaa2/dpaa2_ethdev.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.c b/drivers/net/dpaa2/dpaa2_ethdev.c
index c5b1f161fd..873121524f 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.c
+++ b/drivers/net/dpaa2/dpaa2_ethdev.c
@@ -1390,8 +1390,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
for (i = 0; i < MAX_TCS; i++)
rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
- if (priv->extract.qos_extract_param)
- rte_free((void *)(size_t)priv->extract.qos_extract_param);
+ rte_free((void *)(size_t)priv->extract.qos_extract_param);
DPAA2_PMD_INFO("%s: netdev deleted", dev->data->name);
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.260792702 +0800
+++ 0066-net-dpaa2-remove-unnecessary-check-for-null-before-f.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From e6bf3256b95c77ee4d0b2874e1896d01c41c2d7c Mon Sep 17 00:00:00 2001
+From ea2df1d74f8adc168f548f8bf1f4dab38681168c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit e6bf3256b95c77ee4d0b2874e1896d01c41c2d7c ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -15,4 +17,2 @@
- drivers/net/dpaa2/dpaa2_ethdev.c | 3 +--
- drivers/net/dpaa2/dpaa2_flow.c | 27 +++++++++------------------
- drivers/net/dpaa2/dpaa2_mux.c | 6 ++----
- 3 files changed, 12 insertions(+), 24 deletions(-)
+ drivers/net/dpaa2/dpaa2_ethdev.c | 3 +--
+ 1 file changed, 1 insertion(+), 2 deletions(-)
@@ -21 +21 @@
-index 8cbe481fb5..a9bce854c3 100644
+index c5b1f161fd..873121524f 100644
@@ -24 +24 @@
-@@ -1401,8 +1401,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
+@@ -1390,8 +1390,7 @@ dpaa2_dev_close(struct rte_eth_dev *dev)
@@ -26 +26 @@
- rte_free(priv->extract.tc_extract_param[i]);
+ rte_free((void *)(size_t)priv->extract.tc_extract_param[i]);
@@ -29,2 +29,2 @@
-- rte_free(priv->extract.qos_extract_param);
-+ rte_free(priv->extract.qos_extract_param);
+- rte_free((void *)(size_t)priv->extract.qos_extract_param);
++ rte_free((void *)(size_t)priv->extract.qos_extract_param);
@@ -34,69 +33,0 @@
-diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
-index de850ae0cf..c94eb51ba5 100644
---- a/drivers/net/dpaa2/dpaa2_flow.c
-+++ b/drivers/net/dpaa2/dpaa2_flow.c
-@@ -4784,8 +4784,7 @@ end_flow_set:
- }
- }
-
-- if (dpaa2_pattern)
-- rte_free(dpaa2_pattern);
-+ rte_free(dpaa2_pattern);
-
- return ret;
- }
-@@ -5057,14 +5056,10 @@ mem_failure:
-
- creation_error:
- if (flow) {
-- if (flow->qos_key_addr)
-- rte_free(flow->qos_key_addr);
-- if (flow->qos_mask_addr)
-- rte_free(flow->qos_mask_addr);
-- if (flow->fs_key_addr)
-- rte_free(flow->fs_key_addr);
-- if (flow->fs_mask_addr)
-- rte_free(flow->fs_mask_addr);
-+ rte_free(flow->qos_key_addr);
-+ rte_free(flow->qos_mask_addr);
-+ rte_free(flow->fs_key_addr);
-+ rte_free(flow->fs_mask_addr);
- rte_free(flow);
- }
- priv->curr = NULL;
-@@ -5128,14 +5123,10 @@ dpaa2_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *_flow,
- }
-
- LIST_REMOVE(flow, next);
-- if (flow->qos_key_addr)
-- rte_free(flow->qos_key_addr);
-- if (flow->qos_mask_addr)
-- rte_free(flow->qos_mask_addr);
-- if (flow->fs_key_addr)
-- rte_free(flow->fs_key_addr);
-- if (flow->fs_mask_addr)
-- rte_free(flow->fs_mask_addr);
-+ rte_free(flow->qos_key_addr);
-+ rte_free(flow->qos_mask_addr);
-+ rte_free(flow->fs_key_addr);
-+ rte_free(flow->fs_mask_addr);
- /* Now free the flow */
- rte_free(flow);
-
-diff --git a/drivers/net/dpaa2/dpaa2_mux.c b/drivers/net/dpaa2/dpaa2_mux.c
-index e9d48a81a8..2f124313fa 100644
---- a/drivers/net/dpaa2/dpaa2_mux.c
-+++ b/drivers/net/dpaa2/dpaa2_mux.c
-@@ -329,10 +329,8 @@ rte_pmd_dpaa2_mux_flow_create(uint32_t dpdmux_id,
- }
-
- creation_error:
-- if (key_cfg_va)
-- rte_free(key_cfg_va);
-- if (key_va)
-- rte_free(key_va);
-+ rte_free(key_cfg_va);
-+ rte_free(key_va);
-
- return ret;
- }
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/mlx5: fix error CQE handling for 128 bytes CQE' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (65 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/dpaa2: remove unnecessary check for null before free' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix shared queue port number in vector Rx' " Xueming Li
` (29 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Alexander Kozyrev; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ef327e576d3928c04767e3832406fcd5822bd9c0
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ef327e576d3928c04767e3832406fcd5822bd9c0 Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev <akozyrev@nvidia.com>
Date: Mon, 28 Oct 2024 19:17:07 +0200
Subject: [PATCH] common/mlx5: fix error CQE handling for 128 bytes CQE
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3cddeba0ca38b00c7dc646277484d08a4cb2d862 ]
The completion queue element size can be independently configured
to report either 64 or 128 byte CQEs by programming the cqe_sz parameter
at CQ creation. This parameter depends on the cache line size and
affects both regular CQEs and error CQEs. But the error handling
assumes that an error CQE is 64 bytes and doesn't take the padding
into consideration on platforms with 128-byte cache lines.
Fix the error CQE size in all error handling routines in mlx5.
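The size relationship can be sanity-checked with a small sketch (the
field split is illustrative; the real layout is the mlx5_error_cqe
structure added in the patch below, with syndrome offsets 116 vs 52):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define CQE_SIZE 128    /* assume a platform with 128-byte cache lines */

struct error_cqe_sketch {
#if (CQE_SIZE == 128)
        uint8_t padding[64];    /* error fields live in the second half */
#endif
        uint8_t fields[64];     /* the 64 bytes of actual error information */
};

int main(void)
{
        /* Without the leading padding, the error layout would not span the
         * configured CQE and the syndrome would be read at the wrong offset. */
        static_assert(sizeof(struct error_cqe_sketch) == CQE_SIZE,
                      "error CQE layout must match the configured CQE size");
        printf("error CQE sketch size: %zu bytes\n",
               sizeof(struct error_cqe_sketch));
        return 0;
}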
Fixes: 957e45fb7bcb ("net/mlx5: handle Tx completion with error")
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/common/mlx5/mlx5_prm.h | 29 ++++++++++++++++++++-
drivers/common/mlx5/windows/mlx5_win_defs.h | 12 ---------
drivers/compress/mlx5/mlx5_compress.c | 4 +--
drivers/crypto/mlx5/mlx5_crypto_gcm.c | 2 +-
drivers/crypto/mlx5/mlx5_crypto_xts.c | 2 +-
drivers/net/mlx5/mlx5_flow_aso.c | 6 ++---
drivers/net/mlx5/mlx5_rx.c | 2 +-
drivers/net/mlx5/mlx5_tx.c | 8 +++---
8 files changed, 40 insertions(+), 25 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 3cbb1179c0..0875256e36 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -268,8 +268,12 @@
/* Maximum number of DS in WQE. Limited by 6-bit field. */
#define MLX5_DSEG_MAX 63
-/* The 32 bit syndrome offset in struct mlx5_err_cqe. */
+/* The 32 bit syndrome offset in struct mlx5_error_cqe. */
+#if (RTE_CACHE_LINE_SIZE == 128)
+#define MLX5_ERROR_CQE_SYNDROME_OFFSET 116
+#else
#define MLX5_ERROR_CQE_SYNDROME_OFFSET 52
+#endif
/* The completion mode offset in the WQE control segment line 2. */
#define MLX5_COMP_MODE_OFFSET 2
@@ -415,6 +419,29 @@ struct mlx5_wqe_mprq {
#define MLX5_MPRQ_STRIDE_SHIFT_BYTE 2
+struct mlx5_error_cqe {
+#if (RTE_CACHE_LINE_SIZE == 128)
+ uint8_t padding[64];
+#endif
+ uint8_t rsvd0[2];
+ uint16_t eth_wqe_id;
+ uint8_t rsvd1[16];
+ uint16_t ib_stride_index;
+ uint8_t rsvd2[10];
+ uint32_t srqn;
+ uint8_t rsvd3[8];
+ uint32_t byte_cnt;
+ uint8_t rsvd4[4];
+ uint8_t hw_err_synd;
+ uint8_t hw_synd_type;
+ uint8_t vendor_err_synd;
+ uint8_t syndrome;
+ uint32_t s_wqe_opcode_qpn;
+ uint16_t wqe_counter;
+ uint8_t signature;
+ uint8_t op_own;
+};
+
/* CQ element structure - should be equal to the cache line size */
struct mlx5_cqe {
#if (RTE_CACHE_LINE_SIZE == 128)
diff --git a/drivers/common/mlx5/windows/mlx5_win_defs.h b/drivers/common/mlx5/windows/mlx5_win_defs.h
index 79e7a7f386..d60df6fd37 100644
--- a/drivers/common/mlx5/windows/mlx5_win_defs.h
+++ b/drivers/common/mlx5/windows/mlx5_win_defs.h
@@ -219,18 +219,6 @@ struct mlx5_action {
} dest_tir;
};
-struct mlx5_err_cqe {
- uint8_t rsvd0[32];
- uint32_t srqn;
- uint8_t rsvd1[18];
- uint8_t vendor_err_synd;
- uint8_t syndrome;
- uint32_t s_wqe_opcode_qpn;
- uint16_t wqe_counter;
- uint8_t signature;
- uint8_t op_own;
-};
-
struct mlx5_wqe_srq_next_seg {
uint8_t rsvd0[2];
rte_be16_t next_wqe_index;
diff --git a/drivers/compress/mlx5/mlx5_compress.c b/drivers/compress/mlx5/mlx5_compress.c
index 41d9752833..702108c5f9 100644
--- a/drivers/compress/mlx5/mlx5_compress.c
+++ b/drivers/compress/mlx5/mlx5_compress.c
@@ -602,7 +602,7 @@ mlx5_compress_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe,
size_t i;
DRV_LOG(ERR, "Error cqe:");
- for (i = 0; i < sizeof(struct mlx5_err_cqe) >> 2; i += 4)
+ for (i = 0; i < sizeof(struct mlx5_error_cqe) >> 2; i += 4)
DRV_LOG(ERR, "%08X %08X %08X %08X", cqe[i], cqe[i + 1],
cqe[i + 2], cqe[i + 3]);
DRV_LOG(ERR, "\nError wqe:");
@@ -620,7 +620,7 @@ mlx5_compress_cqe_err_handle(struct mlx5_compress_qp *qp,
struct rte_comp_op *op)
{
const uint32_t idx = qp->ci & (qp->entries_n - 1);
- volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
+ volatile struct mlx5_error_cqe *cqe = (volatile struct mlx5_error_cqe *)
&qp->cq.cqes[idx];
volatile struct mlx5_gga_wqe *wqes = (volatile struct mlx5_gga_wqe *)
qp->qp.wqes;
diff --git a/drivers/crypto/mlx5/mlx5_crypto_gcm.c b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
index 8b9953b46d..9b6c8dc4d5 100644
--- a/drivers/crypto/mlx5/mlx5_crypto_gcm.c
+++ b/drivers/crypto/mlx5/mlx5_crypto_gcm.c
@@ -856,7 +856,7 @@ mlx5_crypto_gcm_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *
{
uint8_t op_code;
const uint32_t idx = qp->cq_ci & (qp->entries_n - 1);
- volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
+ volatile struct mlx5_error_cqe *cqe = (volatile struct mlx5_error_cqe *)
&qp->cq_obj.cqes[idx];
op_code = rte_be_to_cpu_32(cqe->s_wqe_opcode_qpn) >> MLX5_CQ_INDEX_WIDTH;
diff --git a/drivers/crypto/mlx5/mlx5_crypto_xts.c b/drivers/crypto/mlx5/mlx5_crypto_xts.c
index d4e1dd718c..b9214711ac 100644
--- a/drivers/crypto/mlx5/mlx5_crypto_xts.c
+++ b/drivers/crypto/mlx5/mlx5_crypto_xts.c
@@ -363,7 +363,7 @@ static __rte_noinline void
mlx5_crypto_xts_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *op)
{
const uint32_t idx = qp->ci & (qp->entries_n - 1);
- volatile struct mlx5_err_cqe *cqe = (volatile struct mlx5_err_cqe *)
+ volatile struct mlx5_error_cqe *cqe = (volatile struct mlx5_error_cqe *)
&qp->cq_obj.cqes[idx];
op->status = RTE_CRYPTO_OP_STATUS_ERROR;
diff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c
index ab9eb21e01..b78d80ab44 100644
--- a/drivers/net/mlx5/mlx5_flow_aso.c
+++ b/drivers/net/mlx5/mlx5_flow_aso.c
@@ -489,7 +489,7 @@ mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)
int i;
DRV_LOG(ERR, "Error cqe:");
- for (i = 0; i < 16; i += 4)
+ for (i = 0; i < (int)sizeof(struct mlx5_error_cqe) / 4; i += 4)
DRV_LOG(ERR, "%08X %08X %08X %08X", cqe[i], cqe[i + 1],
cqe[i + 2], cqe[i + 3]);
DRV_LOG(ERR, "\nError wqe:");
@@ -509,8 +509,8 @@ mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)
{
struct mlx5_aso_cq *cq = &sq->cq;
uint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);
- volatile struct mlx5_err_cqe *cqe =
- (volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];
+ volatile struct mlx5_error_cqe *cqe =
+ (volatile struct mlx5_error_cqe *)&cq->cq_obj.cqes[idx];
cq->errors++;
idx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);
diff --git a/drivers/net/mlx5/mlx5_rx.c b/drivers/net/mlx5/mlx5_rx.c
index cc087348a4..86a7e090a1 100644
--- a/drivers/net/mlx5/mlx5_rx.c
+++ b/drivers/net/mlx5/mlx5_rx.c
@@ -459,7 +459,7 @@ mlx5_rx_err_handle(struct mlx5_rxq_data *rxq, uint8_t vec,
container_of(rxq, struct mlx5_rxq_ctrl, rxq);
union {
volatile struct mlx5_cqe *cqe;
- volatile struct mlx5_err_cqe *err_cqe;
+ volatile struct mlx5_error_cqe *err_cqe;
} u = {
.cqe = &(*rxq->cqes)[(rxq->cq_ci - vec) & cqe_mask],
};
diff --git a/drivers/net/mlx5/mlx5_tx.c b/drivers/net/mlx5/mlx5_tx.c
index 1fe9521dfc..4148d6d899 100644
--- a/drivers/net/mlx5/mlx5_tx.c
+++ b/drivers/net/mlx5/mlx5_tx.c
@@ -55,7 +55,7 @@ tx_recover_qp(struct mlx5_txq_ctrl *txq_ctrl)
/* Return 1 if the error CQE is signed otherwise, sign it and return 0. */
static int
-check_err_cqe_seen(volatile struct mlx5_err_cqe *err_cqe)
+check_err_cqe_seen(volatile struct mlx5_error_cqe *err_cqe)
{
static const uint8_t magic[] = "seen";
int ret = 1;
@@ -83,7 +83,7 @@ check_err_cqe_seen(volatile struct mlx5_err_cqe *err_cqe)
*/
static int
mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
- volatile struct mlx5_err_cqe *err_cqe)
+ volatile struct mlx5_error_cqe *err_cqe)
{
if (err_cqe->syndrome != MLX5_CQE_SYNDROME_WR_FLUSH_ERR) {
const uint16_t wqe_m = ((1 << txq->wqe_n) - 1);
@@ -107,7 +107,7 @@ mlx5_tx_error_cqe_handle(struct mlx5_txq_data *__rte_restrict txq,
mlx5_dump_debug_information(name, "MLX5 Error CQ:",
(const void *)((uintptr_t)
txq->cqes),
- sizeof(struct mlx5_cqe) *
+ sizeof(struct mlx5_error_cqe) *
(1 << txq->cqe_n));
mlx5_dump_debug_information(name, "MLX5 Error SQ:",
(const void *)((uintptr_t)
@@ -206,7 +206,7 @@ mlx5_tx_handle_completion(struct mlx5_txq_data *__rte_restrict txq,
*/
rte_wmb();
ret = mlx5_tx_error_cqe_handle
- (txq, (volatile struct mlx5_err_cqe *)cqe);
+ (txq, (volatile struct mlx5_error_cqe *)cqe);
if (unlikely(ret < 0)) {
/*
* Some error occurred on queue error
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.290431701 +0800
+++ 0067-common-mlx5-fix-error-CQE-handling-for-128-bytes-CQE.patch 2024-12-06 23:26:43.993044827 +0800
@@ -1 +1 @@
-From 3cddeba0ca38b00c7dc646277484d08a4cb2d862 Mon Sep 17 00:00:00 2001
+From ef327e576d3928c04767e3832406fcd5822bd9c0 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3cddeba0ca38b00c7dc646277484d08a4cb2d862 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -25 +26,0 @@
- drivers/net/mlx5/hws/mlx5dr_send.c | 2 +-
@@ -29 +30 @@
- 9 files changed, 41 insertions(+), 26 deletions(-)
+ 8 files changed, 40 insertions(+), 25 deletions(-)
@@ -32 +33 @@
-index 359f02f17c..210158350d 100644
+index 3cbb1179c0..0875256e36 100644
@@ -103 +104 @@
-index 5998d060e4..82105bfebd 100644
+index 41d9752833..702108c5f9 100644
@@ -125 +126 @@
-index f598273873..cd21605bd2 100644
+index 8b9953b46d..9b6c8dc4d5 100644
@@ -128 +129 @@
-@@ -877,7 +877,7 @@ mlx5_crypto_gcm_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *
+@@ -856,7 +856,7 @@ mlx5_crypto_gcm_cqe_err_handle(struct mlx5_crypto_qp *qp, struct rte_crypto_op *
@@ -150,13 +150,0 @@
-diff --git a/drivers/net/mlx5/hws/mlx5dr_send.c b/drivers/net/mlx5/hws/mlx5dr_send.c
-index e9abf3dddb..e121c7f7ed 100644
---- a/drivers/net/mlx5/hws/mlx5dr_send.c
-+++ b/drivers/net/mlx5/hws/mlx5dr_send.c
-@@ -599,7 +599,7 @@ static void mlx5dr_send_engine_poll_cq(struct mlx5dr_send_engine *queue,
- return;
-
- if (unlikely(cqe_opcode != MLX5_CQE_REQ)) {
-- struct mlx5_err_cqe *err_cqe = (struct mlx5_err_cqe *)cqe;
-+ struct mlx5_error_cqe *err_cqe = (struct mlx5_error_cqe *)cqe;
-
- DR_LOG(ERR, "CQE ERR:0x%x, Vendor_ERR:0x%x, OP:0x%x, QPN:0x%x, WQE_CNT:0x%x",
- err_cqe->syndrome, err_cqe->vendor_err_synd, cqe_opcode,
@@ -164 +152 @@
-index a94b228396..feca8c3e89 100644
+index ab9eb21e01..b78d80ab44 100644
@@ -188 +176 @@
-index f241809e08..5e58eb8bc9 100644
+index cc087348a4..86a7e090a1 100644
@@ -201 +189 @@
-index fc105970a3..4286876e12 100644
+index 1fe9521dfc..4148d6d899 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix shared queue port number in vector Rx' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (66 preceding siblings ...)
2024-12-07 8:00 ` patch 'common/mlx5: fix error CQE handling for 128 bytes CQE' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5/hws: fix allocation of STCs' " Xueming Li
` (28 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Alexander Kozyrev; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=200ea61fba771b8fe4a8ae6f2abf9827a427f8f6
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 200ea61fba771b8fe4a8ae6f2abf9827a427f8f6 Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev <akozyrev@nvidia.com>
Date: Mon, 28 Oct 2024 19:53:54 +0200
Subject: [PATCH] net/mlx5: fix shared queue port number in vector Rx
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3638f431b9ff39003e31c3a761d407e04b25576a ]
The wrong CQE is used to get the shared Rx queue port number in the
vectorized Rx burst routine. Fix the CQE indexing.
Fixes: 25ed2ebff131 ("net/mlx5: support shared Rx queue port data path")
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 12 ++++++------
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 24 ++++++++++++------------
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 6 +++---
3 files changed, 21 insertions(+), 21 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index cccfa7f2d3..f6e74f4180 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -1249,9 +1249,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
if (unlikely(rxq->shared)) {
pkts[pos]->port = cq[pos].user_index_low;
- pkts[pos + p1]->port = cq[pos + p1].user_index_low;
- pkts[pos + p2]->port = cq[pos + p2].user_index_low;
- pkts[pos + p3]->port = cq[pos + p3].user_index_low;
+ pkts[pos + 1]->port = cq[pos + p1].user_index_low;
+ pkts[pos + 2]->port = cq[pos + p2].user_index_low;
+ pkts[pos + 3]->port = cq[pos + p3].user_index_low;
}
if (rxq->hw_timestamp) {
int offset = rxq->timestamp_offset;
@@ -1295,17 +1295,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
metadata;
pkts[pos]->ol_flags |= metadata ? flag : 0ULL;
metadata = rte_be_to_cpu_32
- (cq[pos + 1].flow_table_metadata) & mask;
+ (cq[pos + p1].flow_table_metadata) & mask;
*RTE_MBUF_DYNFIELD(pkts[pos + 1], offs, uint32_t *) =
metadata;
pkts[pos + 1]->ol_flags |= metadata ? flag : 0ULL;
metadata = rte_be_to_cpu_32
- (cq[pos + 2].flow_table_metadata) & mask;
+ (cq[pos + p2].flow_table_metadata) & mask;
*RTE_MBUF_DYNFIELD(pkts[pos + 2], offs, uint32_t *) =
metadata;
pkts[pos + 2]->ol_flags |= metadata ? flag : 0ULL;
metadata = rte_be_to_cpu_32
- (cq[pos + 3].flow_table_metadata) & mask;
+ (cq[pos + p3].flow_table_metadata) & mask;
*RTE_MBUF_DYNFIELD(pkts[pos + 3], offs, uint32_t *) =
metadata;
pkts[pos + 3]->ol_flags |= metadata ? flag : 0ULL;
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 3ed688191f..942d395dc9 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -835,13 +835,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
rxq_cq_to_ptype_oflags_v(rxq, ptype_info, flow_tag,
opcode, &elts[pos]);
if (unlikely(rxq->shared)) {
- elts[pos]->port = container_of(p0, struct mlx5_cqe,
+ pkts[pos]->port = container_of(p0, struct mlx5_cqe,
pkt_info)->user_index_low;
- elts[pos + 1]->port = container_of(p1, struct mlx5_cqe,
+ pkts[pos + 1]->port = container_of(p1, struct mlx5_cqe,
pkt_info)->user_index_low;
- elts[pos + 2]->port = container_of(p2, struct mlx5_cqe,
+ pkts[pos + 2]->port = container_of(p2, struct mlx5_cqe,
pkt_info)->user_index_low;
- elts[pos + 3]->port = container_of(p3, struct mlx5_cqe,
+ pkts[pos + 3]->port = container_of(p3, struct mlx5_cqe,
pkt_info)->user_index_low;
}
if (unlikely(rxq->hw_timestamp)) {
@@ -853,34 +853,34 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
ts = rte_be_to_cpu_64
(container_of(p0, struct mlx5_cqe,
pkt_info)->timestamp);
- mlx5_timestamp_set(elts[pos], offset,
+ mlx5_timestamp_set(pkts[pos], offset,
mlx5_txpp_convert_rx_ts(sh, ts));
ts = rte_be_to_cpu_64
(container_of(p1, struct mlx5_cqe,
pkt_info)->timestamp);
- mlx5_timestamp_set(elts[pos + 1], offset,
+ mlx5_timestamp_set(pkts[pos + 1], offset,
mlx5_txpp_convert_rx_ts(sh, ts));
ts = rte_be_to_cpu_64
(container_of(p2, struct mlx5_cqe,
pkt_info)->timestamp);
- mlx5_timestamp_set(elts[pos + 2], offset,
+ mlx5_timestamp_set(pkts[pos + 2], offset,
mlx5_txpp_convert_rx_ts(sh, ts));
ts = rte_be_to_cpu_64
(container_of(p3, struct mlx5_cqe,
pkt_info)->timestamp);
- mlx5_timestamp_set(elts[pos + 3], offset,
+ mlx5_timestamp_set(pkts[pos + 3], offset,
mlx5_txpp_convert_rx_ts(sh, ts));
} else {
- mlx5_timestamp_set(elts[pos], offset,
+ mlx5_timestamp_set(pkts[pos], offset,
rte_be_to_cpu_64(container_of(p0,
struct mlx5_cqe, pkt_info)->timestamp));
- mlx5_timestamp_set(elts[pos + 1], offset,
+ mlx5_timestamp_set(pkts[pos + 1], offset,
rte_be_to_cpu_64(container_of(p1,
struct mlx5_cqe, pkt_info)->timestamp));
- mlx5_timestamp_set(elts[pos + 2], offset,
+ mlx5_timestamp_set(pkts[pos + 2], offset,
rte_be_to_cpu_64(container_of(p2,
struct mlx5_cqe, pkt_info)->timestamp));
- mlx5_timestamp_set(elts[pos + 3], offset,
+ mlx5_timestamp_set(pkts[pos + 3], offset,
rte_be_to_cpu_64(container_of(p3,
struct mlx5_cqe, pkt_info)->timestamp));
}
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index 2bdd1f676d..fb59c11346 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -783,9 +783,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
rxq_cq_to_ptype_oflags_v(rxq, cqes, opcode, &pkts[pos]);
if (unlikely(rxq->shared)) {
pkts[pos]->port = cq[pos].user_index_low;
- pkts[pos + p1]->port = cq[pos + p1].user_index_low;
- pkts[pos + p2]->port = cq[pos + p2].user_index_low;
- pkts[pos + p3]->port = cq[pos + p3].user_index_low;
+ pkts[pos + 1]->port = cq[pos + p1].user_index_low;
+ pkts[pos + 2]->port = cq[pos + p2].user_index_low;
+ pkts[pos + 3]->port = cq[pos + p3].user_index_low;
}
if (unlikely(rxq->hw_timestamp)) {
int offset = rxq->timestamp_offset;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.325119501 +0800
+++ 0068-net-mlx5-fix-shared-queue-port-number-in-vector-Rx.patch 2024-12-06 23:26:44.003044827 +0800
@@ -1 +1 @@
-From 3638f431b9ff39003e31c3a761d407e04b25576a Mon Sep 17 00:00:00 2001
+From 200ea61fba771b8fe4a8ae6f2abf9827a427f8f6 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3638f431b9ff39003e31c3a761d407e04b25576a ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index b2bbc4ba17..ca614ecf9d 100644
+index cccfa7f2d3..f6e74f4180 100644
@@ -24 +26 @@
-@@ -1251,9 +1251,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -1249,9 +1249,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -37 +39 @@
-@@ -1297,17 +1297,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -1295,17 +1295,17 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -59 +61 @@
-index 0ce9827ed9..519fff5b2c 100644
+index 3ed688191f..942d395dc9 100644
@@ -62 +64 @@
-@@ -837,13 +837,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -835,13 +835,13 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -80 +82 @@
-@@ -855,34 +855,34 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -853,34 +853,34 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -124 +126 @@
-index e71d6c303f..0a2b67e750 100644
+index 2bdd1f676d..fb59c11346 100644
@@ -127 +129 @@
-@@ -785,9 +785,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -783,9 +783,9 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5/hws: fix allocation of STCs' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (67 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix shared queue port number in vector Rx' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix counter query loop getting stuck' " Xueming Li
` (27 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Erez Shitrit; +Cc: Xueming Li, Bing Zhao, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=006ec58ed75610a681ad97cd5364d28e6f5c908c
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 006ec58ed75610a681ad97cd5364d28e6f5c908c Mon Sep 17 00:00:00 2001
From: Erez Shitrit <erezsh@nvidia.com>
Date: Tue, 29 Oct 2024 15:24:03 +0200
Subject: [PATCH] net/mlx5/hws: fix allocation of STCs
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 691326d15da263d068de71c468c74c225c4f75c3 ]
STC is a limited resource of the HW and might get consumed until no more
contexts can be opened.
So, let the user define how many STCs to allocate per context.
In case the user has many representors, there is no need to allocate the
default number of STCs for each of them; otherwise, after a certain
number of representors, no more STCs will remain in the system.
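A reduced sketch of the sizing decision (the default constant is
illustrative; 11 matches the MLX5_REPR_STC_MEMORY_LOG value added in the
patch below):

#include <stdint.h>
#include <stdio.h>

#define DEFAULT_STC_LOG_SZ 14u   /* illustrative stand-in for the pool default */
#define REPR_STC_LOG_SZ    11u   /* smaller pool for representor ports */

static uint32_t stc_pool_log_sz(int is_representor, uint32_t hw_log_max)
{
        uint32_t requested = is_representor ? REPR_STC_LOG_SZ : DEFAULT_STC_LOG_SZ;

        /* Never request more than the device reports it can allocate. */
        return requested < hw_log_max ? requested : hw_log_max;
}

int main(void)
{
        printf("PF pool: 2^%u STCs, representor pool: 2^%u STCs\n",
               stc_pool_log_sz(0, 16), stc_pool_log_sz(1, 16));
        return 0;
}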
Fixes: b0290e56dd08 ("net/mlx5/hws: add context object")
Signed-off-by: Erez Shitrit <erezsh@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/hws/mlx5dr.h | 4 +++-
drivers/net/mlx5/hws/mlx5dr_context.c | 9 ++++++---
drivers/net/mlx5/mlx5_flow.h | 3 +++
drivers/net/mlx5/mlx5_flow_hw.c | 3 +++
4 files changed, 15 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/hws/mlx5dr.h b/drivers/net/mlx5/hws/mlx5dr.h
index f003d9f446..cbb79b8ba1 100644
--- a/drivers/net/mlx5/hws/mlx5dr.h
+++ b/drivers/net/mlx5/hws/mlx5dr.h
@@ -96,8 +96,10 @@ struct mlx5dr_context_attr {
uint16_t queues;
uint16_t queue_size;
size_t initial_log_ste_memory; /* Currently not in use */
- /* Optional PD used for allocating res ources */
+ /* Optional PD used for allocating resources */
struct ibv_pd *pd;
+ /* Optional the STC array size for that context */
+ size_t initial_log_stc_memory;
/* Optional other ctx for resources allocation, all objects will be created on it */
struct ibv_context *shared_ibv_ctx;
};
diff --git a/drivers/net/mlx5/hws/mlx5dr_context.c b/drivers/net/mlx5/hws/mlx5dr_context.c
index 7f120b3b1b..6c4c18b041 100644
--- a/drivers/net/mlx5/hws/mlx5dr_context.c
+++ b/drivers/net/mlx5/hws/mlx5dr_context.c
@@ -19,7 +19,8 @@ uint8_t mlx5dr_context_get_reparse_mode(struct mlx5dr_context *ctx)
return MLX5_IFC_RTC_REPARSE_ALWAYS;
}
-static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx)
+static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx,
+ struct mlx5dr_context_attr *attr)
{
struct mlx5dr_pool_attr pool_attr = {0};
uint8_t max_log_sz;
@@ -34,7 +35,9 @@ static int mlx5dr_context_pools_init(struct mlx5dr_context *ctx)
/* Create an STC pool per FT type */
pool_attr.pool_type = MLX5DR_POOL_TYPE_STC;
pool_attr.flags = MLX5DR_POOL_FLAGS_FOR_STC_POOL;
- max_log_sz = RTE_MIN(MLX5DR_POOL_STC_LOG_SZ, ctx->caps->stc_alloc_log_max);
+ if (!attr->initial_log_stc_memory)
+ attr->initial_log_stc_memory = MLX5DR_POOL_STC_LOG_SZ;
+ max_log_sz = RTE_MIN(attr->initial_log_stc_memory, ctx->caps->stc_alloc_log_max);
pool_attr.alloc_log_sz = RTE_MAX(max_log_sz, ctx->caps->stc_alloc_log_gran);
for (i = 0; i < MLX5DR_TABLE_TYPE_MAX; i++) {
@@ -172,7 +175,7 @@ static int mlx5dr_context_init_hws(struct mlx5dr_context *ctx,
if (ret)
return ret;
- ret = mlx5dr_context_pools_init(ctx);
+ ret = mlx5dr_context_pools_init(ctx, attr);
if (ret)
goto uninit_pd;
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index e6b0c1bb80..afb3c3b72f 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -3011,6 +3011,9 @@ flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
void
mlx5_indirect_list_handles_release(struct rte_eth_dev *dev);
#ifdef HAVE_MLX5_HWS_SUPPORT
+
+#define MLX5_REPR_STC_MEMORY_LOG 11
+
struct mlx5_mirror;
void
mlx5_hw_mirror_destroy(struct rte_eth_dev *dev, struct mlx5_mirror *mirror);
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 0eaf38537a..2437e44051 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -9570,6 +9570,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
}
dr_ctx_attr.pd = priv->sh->cdev->pd;
dr_ctx_attr.queues = nb_q_updated;
+ /* Assign initial value of STC numbers for representors. */
+ if (priv->representor)
+ dr_ctx_attr.initial_log_stc_memory = MLX5_REPR_STC_MEMORY_LOG;
/* Queue size should all be the same. Take the first one. */
dr_ctx_attr.queue_size = _queue_attr[0]->size;
if (port_attr->flags & RTE_FLOW_PORT_FLAG_SHARE_INDIRECT) {
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.356378201 +0800
+++ 0069-net-mlx5-hws-fix-allocation-of-STCs.patch 2024-12-06 23:26:44.013044827 +0800
@@ -1 +1 @@
-From 691326d15da263d068de71c468c74c225c4f75c3 Mon Sep 17 00:00:00 2001
+From 006ec58ed75610a681ad97cd5364d28e6f5c908c Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 691326d15da263d068de71c468c74c225c4f75c3 ]
@@ -15 +17,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 1b58eeb2c7..3668ab9fcf 100644
+index f003d9f446..cbb79b8ba1 100644
@@ -31 +33 @@
-@@ -103,8 +103,10 @@ struct mlx5dr_context_attr {
+@@ -96,8 +96,10 @@ struct mlx5dr_context_attr {
@@ -42 +44 @@
- bool bwc; /* add support for backward compatible API*/
+ };
@@ -44 +46 @@
-index db5e72927a..24741afe58 100644
+index 7f120b3b1b..6c4c18b041 100644
@@ -78 +80 @@
-index 693e07218d..d871b62854 100644
+index e6b0c1bb80..afb3c3b72f 100644
@@ -81 +83 @@
-@@ -3652,6 +3652,9 @@ flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
+@@ -3011,6 +3011,9 @@ flow_hw_get_ipv6_route_ext_mod_id_from_ctx(void *dr_ctx, uint8_t idx)
@@ -92 +94 @@
-index 2a9ef71cd8..c43520ed51 100644
+index 0eaf38537a..2437e44051 100644
@@ -95 +97 @@
-@@ -11825,6 +11825,9 @@ __flow_hw_configure(struct rte_eth_dev *dev,
+@@ -9570,6 +9570,9 @@ flow_hw_configure(struct rte_eth_dev *dev,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix counter query loop getting stuck' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (68 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5/hws: fix allocation of STCs' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue control management' " Xueming Li
` (26 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Dariusz Sosnowski; +Cc: Xueming Li, Ori Kam, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=833fd897b154ab2f3d278426453a7f1514635a66
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 833fd897b154ab2f3d278426453a7f1514635a66 Mon Sep 17 00:00:00 2001
From: Dariusz Sosnowski <dsosnowski@nvidia.com>
Date: Wed, 30 Oct 2024 17:30:46 +0100
Subject: [PATCH] net/mlx5: fix counter query loop getting stuck
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c0e29968294c92ca15fdb34ce63fbba01c4562a6 ]
Counter service thread, responsible for refreshing counter values
stored in host memory, is running an "infinite loop" with the following
logic:
- For each port:
- Refresh port's counter pool - call to __mlx5_hws_cnt_svc().
- Perform aging checks.
- Go to sleep if time left in current cycle.
- Repeat.
__mlx5_hws_cnt_svc(), used to perform the counter value refresh,
implemented the following logic:
1. Store the number of counters waiting for reset.
2. Issue ASO WQEs to refresh all counter values.
3. Move counters from the reset to the reuse list.
   The number of moved counters is limited by the number stored in
   step 1 or step 4.
4. Store the number of counters waiting for reset.
5. If the number of counters waiting for reset is > 0, go to step 2.
Now, if an application constantly creates/destroys flow rules with
counters and even a single counter is added to the reset list during
step 2, the counter service thread might end up issuing ASO WQEs
endlessly, without going to sleep or respecting the configured cycle
time.
This patch fixes that by removing the loop inside __mlx5_hws_cnt_svc().
As a drawback of this fix, the application must allocate enough
counters to accommodate the cycle time. This number is roughly equal
to the expected counter release rate.
This patch also:
- Ensures that proper counter related error code is returned,
when flow rule create failed due to counter allocation problem.
- Adds debug logging to counter service thread.
- Adds documentation for counter service thread.
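Because freed counters stay on the wait_reset ring until the next
service cycle, an application that allocates counters aggressively may
see flow rule creation fail with EAGAIN and should simply retry. A
minimal sketch of such a retry loop (not part of this patch; the retry
budget and the back-off of one default 500 ms cycle are arbitrary
illustration values):

#include <errno.h>
#include <rte_flow.h>
#include <rte_errno.h>
#include <rte_cycles.h>

/* Retry flow creation while the PMD reports EAGAIN, i.e. counters are
 * still parked on the wait_reset ring and will be recycled by the next
 * counter service thread cycle.
 */
static struct rte_flow *
create_counted_flow(uint16_t port_id,
		    const struct rte_flow_attr *attr,
		    const struct rte_flow_item pattern[],
		    const struct rte_flow_action actions[],
		    struct rte_flow_error *err)
{
	struct rte_flow *flow;
	int retries = 10;

	for (;;) {
		flow = rte_flow_create(port_id, attr, pattern, actions, err);
		if (flow != NULL || rte_errno != EAGAIN || retries-- == 0)
			return flow; /* success, hard error or budget spent */
		/* Wait roughly one service cycle (500 ms by default). */
		rte_delay_us_sleep(500 * 1000);
	}
}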
Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
doc/guides/nics/mlx5.rst | 71 +++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_hw.c | 17 +++++---
drivers/net/mlx5/mlx5_hws_cnt.c | 46 ++++++++++++---------
3 files changed, 110 insertions(+), 24 deletions(-)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 2c59b24d78..d1c3284ca1 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -1874,6 +1874,77 @@ directly but neither destroyed nor flushed.
The application should re-create the flows as required after the port restart.
+Notes for flow counters
+-----------------------
+
+mlx5 PMD supports the ``COUNT`` flow action,
+which provides an ability to count packets (and bytes)
+matched against a given flow rule.
+This section describes the high level overview of
+how this support is implemented and limitations.
+
+HW steering flow engine
+~~~~~~~~~~~~~~~~~~~~~~~
+
+Flow counters are allocated from HW in bulks.
+A set of bulks forms a flow counter pool managed by PMD.
+When flow counters are queried from HW,
+each counter is identified by an offset in a given bulk.
+Querying HW flow counter requires sending a request to HW,
+which will request a read of counter values for given offsets.
+HW will asynchronously provide these values through a DMA write.
+
+In order to optimize HW to SW communication,
+these requests are handled in a separate counter service thread
+spawned by mlx5 PMD.
+This service thread will refresh the counter values stored in memory,
+in cycles, each spanning ``svc_cycle_time`` milliseconds.
+By default, ``svc_cycle_time`` is set to 500.
+When applications query the ``COUNT`` flow action,
+PMD returns the values stored in host memory.
+
+mlx5 PMD manages 3 global rings of allocated counter offsets:
+
+- ``free`` ring - Counters which were not used at all.
+- ``wait_reset`` ring - Counters which were used in some flow rules,
+ but were recently freed (flow rule was destroyed
+ or an indirect action was destroyed).
+ Since the count value might have changed
+ between the last counter service thread cycle and the moment it was freed,
+ the value in host memory might be stale.
+ During the next service thread cycle,
+ such counters will be moved to ``reuse`` ring.
+- ``reuse`` ring - Counters which were used at least once
+ and can be reused in new flow rules.
+
+When counters are assigned to a flow rule (or allocated to indirect action),
+the PMD first tries to fetch a counter from ``reuse`` ring.
+If it's empty, the PMD fetches a counter from ``free`` ring.
+
+The counter service thread works as follows:
+
+#. Record counters stored in ``wait_reset`` ring.
+#. Read values of all counters which were used at least once
+ or are currently in use.
+#. Move recorded counters from ``wait_reset`` to ``reuse`` ring.
+#. Sleep for ``(query time) - svc_cycle_time`` milliseconds
+#. Repeat.
+
+Because freeing a counter (by destroying a flow rule or destroying indirect action)
+does not immediately make it available for the application,
+the PMD might return:
+
+- ``ENOENT`` if no counter is available in ``free``, ``reuse``
+ or ``wait_reset`` rings.
+ No counter will be available until the application releases some of them.
+- ``EAGAIN`` if no counter is available in ``free`` and ``reuse`` rings,
+ but there are counters in ``wait_reset`` ring.
+ This means that after the next service thread cycle new counters will be available.
+
+The application has to be aware that flow rule create or indirect action create
+might need be retried.
+
+
Notes for hairpin
-----------------
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 2437e44051..31e11763db 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3186,8 +3186,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
case RTE_FLOW_ACTION_TYPE_COUNT:
cnt_queue = mlx5_hws_cnt_get_queue(priv, &queue);
ret = mlx5_hws_cnt_pool_get(priv->hws_cpool, cnt_queue, &cnt_id, age_idx);
- if (ret != 0)
+ if (ret != 0) {
+ rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_ACTION,
+ action, "Failed to allocate flow counter");
goto error;
+ }
ret = mlx5_hws_cnt_pool_get_action_offset
(priv->hws_cpool,
cnt_id,
@@ -3381,6 +3384,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
struct rte_flow_hw *flow = NULL;
struct mlx5_hw_q_job *job = NULL;
const struct rte_flow_item *rule_items;
+ struct rte_flow_error sub_error = { 0 };
uint32_t flow_idx = 0;
uint32_t res_idx = 0;
int ret;
@@ -3429,7 +3433,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
if (flow_hw_actions_construct(dev, job,
&table->ats[action_template_index],
pattern_template_index, actions,
- rule_acts, queue, error))
+ rule_acts, queue, &sub_error))
goto error;
rule_items = flow_hw_get_rule_items(dev, table, items,
pattern_template_index, job);
@@ -3448,9 +3452,12 @@ error:
mlx5_ipool_free(table->flow, flow_idx);
if (res_idx)
mlx5_ipool_free(table->resource, res_idx);
- rte_flow_error_set(error, rte_errno,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "fail to create rte flow");
+ if (sub_error.cause != RTE_FLOW_ERROR_TYPE_NONE && error != NULL)
+ *error = sub_error;
+ else
+ rte_flow_error_set(error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "fail to create rte flow");
return NULL;
}
diff --git a/drivers/net/mlx5/mlx5_hws_cnt.c b/drivers/net/mlx5/mlx5_hws_cnt.c
index 41edd19bb8..7a88a4001a 100644
--- a/drivers/net/mlx5/mlx5_hws_cnt.c
+++ b/drivers/net/mlx5/mlx5_hws_cnt.c
@@ -56,26 +56,29 @@ __mlx5_hws_cnt_svc(struct mlx5_dev_ctx_shared *sh,
uint32_t ret __rte_unused;
reset_cnt_num = rte_ring_count(reset_list);
- do {
- cpool->query_gen++;
- mlx5_aso_cnt_query(sh, cpool);
- zcdr.n1 = 0;
- zcdu.n1 = 0;
- ret = rte_ring_enqueue_zc_burst_elem_start(reuse_list,
- sizeof(cnt_id_t),
- reset_cnt_num, &zcdu,
- NULL);
- MLX5_ASSERT(ret == reset_cnt_num);
- ret = rte_ring_dequeue_zc_burst_elem_start(reset_list,
- sizeof(cnt_id_t),
- reset_cnt_num, &zcdr,
- NULL);
- MLX5_ASSERT(ret == reset_cnt_num);
- __hws_cnt_r2rcpy(&zcdu, &zcdr, reset_cnt_num);
- rte_ring_dequeue_zc_elem_finish(reset_list, reset_cnt_num);
- rte_ring_enqueue_zc_elem_finish(reuse_list, reset_cnt_num);
+ cpool->query_gen++;
+ mlx5_aso_cnt_query(sh, cpool);
+ zcdr.n1 = 0;
+ zcdu.n1 = 0;
+ ret = rte_ring_enqueue_zc_burst_elem_start(reuse_list,
+ sizeof(cnt_id_t),
+ reset_cnt_num, &zcdu,
+ NULL);
+ MLX5_ASSERT(ret == reset_cnt_num);
+ ret = rte_ring_dequeue_zc_burst_elem_start(reset_list,
+ sizeof(cnt_id_t),
+ reset_cnt_num, &zcdr,
+ NULL);
+ MLX5_ASSERT(ret == reset_cnt_num);
+ __hws_cnt_r2rcpy(&zcdu, &zcdr, reset_cnt_num);
+ rte_ring_dequeue_zc_elem_finish(reset_list, reset_cnt_num);
+ rte_ring_enqueue_zc_elem_finish(reuse_list, reset_cnt_num);
+
+ if (rte_log_can_log(mlx5_logtype, RTE_LOG_DEBUG)) {
reset_cnt_num = rte_ring_count(reset_list);
- } while (reset_cnt_num > 0);
+ DRV_LOG(DEBUG, "ibdev %s cpool %p wait_reset_cnt=%" PRIu32,
+ sh->ibdev_name, (void *)cpool, reset_cnt_num);
+ }
}
/**
@@ -315,6 +318,11 @@ mlx5_hws_cnt_svc(void *opaque)
rte_spinlock_unlock(&sh->cpool_lock);
query_us = query_cycle / (rte_get_timer_hz() / US_PER_S);
sleep_us = interval - query_us;
+ DRV_LOG(DEBUG, "ibdev %s counter service thread: "
+ "interval_us=%" PRIu64 " query_us=%" PRIu64 " "
+ "sleep_us=%" PRIu64,
+ sh->ibdev_name, interval, query_us,
+ interval > query_us ? sleep_us : 0);
if (interval > query_us)
rte_delay_us_sleep(sleep_us);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.403270900 +0800
+++ 0070-net-mlx5-fix-counter-query-loop-getting-stuck.patch 2024-12-06 23:26:44.023044827 +0800
@@ -1 +1 @@
-From c0e29968294c92ca15fdb34ce63fbba01c4562a6 Mon Sep 17 00:00:00 2001
+From 833fd897b154ab2f3d278426453a7f1514635a66 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c0e29968294c92ca15fdb34ce63fbba01c4562a6 ]
@@ -46 +48,0 @@
-Cc: stable@dpdk.org
@@ -57 +59 @@
-index b1d6863f36..145c01fbda 100644
+index 2c59b24d78..d1c3284ca1 100644
@@ -60 +62 @@
-@@ -2021,6 +2021,77 @@ directly but neither destroyed nor flushed.
+@@ -1874,6 +1874,77 @@ directly but neither destroyed nor flushed.
@@ -139 +141 @@
-index 488ef4ce3c..6ad98d40f7 100644
+index 2437e44051..31e11763db 100644
@@ -142 +144 @@
-@@ -3734,8 +3734,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
+@@ -3186,8 +3186,11 @@ flow_hw_actions_construct(struct rte_eth_dev *dev,
@@ -155,2 +157 @@
-@@ -3980,6 +3983,7 @@ flow_hw_async_flow_create_generic(struct rte_eth_dev *dev,
- struct mlx5dr_rule_action *rule_acts;
+@@ -3381,6 +3384,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
@@ -157,0 +159 @@
+ struct mlx5_hw_q_job *job = NULL;
@@ -163 +165,2 @@
-@@ -4037,7 +4041,7 @@ flow_hw_async_flow_create_generic(struct rte_eth_dev *dev,
+@@ -3429,7 +3433,7 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
+ if (flow_hw_actions_construct(dev, job,
@@ -165,2 +168 @@
- table->its[pattern_template_index]->item_flags,
- flow->table, actions,
+ pattern_template_index, actions,
@@ -171,4 +173,2 @@
- pattern_template_index, &priv->hw_q[queue].pp);
-@@ -4074,9 +4078,12 @@ error:
- mlx5_ipool_free(table->resource, res_idx);
- if (flow_idx)
+ pattern_template_index, job);
+@@ -3448,9 +3452,12 @@ error:
@@ -175,0 +176,2 @@
+ if (res_idx)
+ mlx5_ipool_free(table->resource, res_idx);
@@ -189 +191 @@
-index def0b19deb..0197c098f6 100644
+index 41edd19bb8..7a88a4001a 100644
@@ -241 +243 @@
-@@ -325,6 +328,11 @@ mlx5_hws_cnt_svc(void *opaque)
+@@ -315,6 +318,11 @@ mlx5_hws_cnt_svc(void *opaque)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix Rx queue control management' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (69 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix counter query loop getting stuck' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'common/mlx5: fix misalignment' " Xueming Li
` (25 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bing Zhao
Cc: Xueming Li, Viacheslav Ovsiienko, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=0c7dbcd34ae74fdc5a51525690f5ba70ad6edc25
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 0c7dbcd34ae74fdc5a51525690f5ba70ad6edc25 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Mon, 4 Nov 2024 18:15:41 +0200
Subject: [PATCH] net/mlx5: fix Rx queue control management
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 3c9a82fa6edc06c1d4dc6c0ac53609002c4d9462 ]
With the shared Rx queue feature introduced, the control and private
Rx queue structures are decoupled; each control structure can be
shared by multiple queues of all the representors inside a domain.
So it should be managed only by the shared context instead of the
private data of each device. The previous workaround was to use a flag
to check the owner (allocator) of the structure and to handle it only
at that device's closing stage.
A proper, formal solution is to add a reference count to each control
structure and to free the structure only when no reference to it is
left, which gets rid of the UAF issue.
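A minimal standalone sketch of that reference-counting pattern (the
names are illustrative, not the driver's; the actual counter is
attached to struct mlx5_rxq_ctrl in the diff below):

#include <stdint.h>
#include <stdlib.h>
#include <rte_stdatomic.h>

/* A control object shared by several owners; the last owner frees it. */
struct ctrl_obj {
	RTE_ATOMIC(uint32_t) ctrl_ref; /* one count per owning queue */
	/* ... shared state ... */
};

static void
ctrl_obj_get(struct ctrl_obj *c)
{
	rte_atomic_fetch_add_explicit(&c->ctrl_ref, 1,
				      rte_memory_order_relaxed);
}

static void
ctrl_obj_put(struct ctrl_obj *c)
{
	/* Only the caller that drops the count from 1 to 0 frees it,
	 * which removes the use-after-free on device close.
	 */
	if (rte_atomic_fetch_sub_explicit(&c->ctrl_ref, 1,
					  rte_memory_order_relaxed) == 1)
		free(c);
}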
Fixes: f957ac996435 ("net/mlx5: workaround list management of Rx queue control")
Fixes: bcc220cb57d7 ("net/mlx5: fix shared Rx queue list management")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 1 -
drivers/net/mlx5/mlx5_flow.c | 4 ++--
drivers/net/mlx5/mlx5_rx.h | 3 +--
drivers/net/mlx5/mlx5_rxq.c | 20 ++++++++++----------
4 files changed, 13 insertions(+), 15 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index bce1d9e749..9a6bd976c2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1886,7 +1886,6 @@ struct mlx5_priv {
uint32_t ctrl_flows; /* Control flow rules. */
rte_spinlock_t flow_list_lock;
struct mlx5_obj_ops obj_ops; /* HW objects operations. */
- LIST_HEAD(rxq, mlx5_rxq_ctrl) rxqsctrl; /* DPDK Rx queues. */
LIST_HEAD(rxqobj, mlx5_rxq_obj) rxqsobj; /* Verbs/DevX Rx queues. */
struct mlx5_list *hrxqs; /* Hash Rx queues. */
LIST_HEAD(txq, mlx5_txq_ctrl) txqsctrl; /* DPDK Tx queues. */
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index fdc7c3ea54..6286eef010 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1748,13 +1748,13 @@ flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
opriv->domain_id != priv->domain_id ||
opriv->mark_enabled)
continue;
- LIST_FOREACH(rxq_ctrl, &opriv->rxqsctrl, next) {
+ LIST_FOREACH(rxq_ctrl, &opriv->sh->shared_rxqs, share_entry) {
rxq_ctrl->rxq.mark = 1;
}
opriv->mark_enabled = 1;
}
} else {
- LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
+ LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
rxq_ctrl->rxq.mark = 1;
}
priv->mark_enabled = 1;
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 08ab0a042d..f78fae26d3 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -151,13 +151,13 @@ struct mlx5_rxq_data {
/* RX queue control descriptor. */
struct mlx5_rxq_ctrl {
struct mlx5_rxq_data rxq; /* Data path structure. */
- LIST_ENTRY(mlx5_rxq_ctrl) next; /* Pointer to the next element. */
LIST_HEAD(priv, mlx5_rxq_priv) owners; /* Owner rxq list. */
struct mlx5_rxq_obj *obj; /* Verbs/DevX elements. */
struct mlx5_dev_ctx_shared *sh; /* Shared context. */
bool is_hairpin; /* Whether RxQ type is Hairpin. */
unsigned int socket; /* CPU socket ID for allocations. */
LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
+ RTE_ATOMIC(uint32_t) ctrl_ref; /* Reference counter. */
uint32_t share_group; /* Group ID of shared RXQ. */
uint16_t share_qid; /* Shared RxQ ID in group. */
unsigned int started:1; /* Whether (shared) RXQ has been started. */
@@ -173,7 +173,6 @@ struct mlx5_rxq_ctrl {
/* RX queue private data. */
struct mlx5_rxq_priv {
uint16_t idx; /* Queue index. */
- bool possessor; /* Shared rxq_ctrl allocated for the 1st time. */
uint32_t refcnt; /* Reference counter. */
struct mlx5_rxq_ctrl *ctrl; /* Shared Rx Queue. */
LIST_ENTRY(mlx5_rxq_priv) owner_entry; /* Entry in shared rxq_ctrl. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 82d8f29b31..aa2e8fd9e3 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -946,7 +946,6 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
rte_errno = ENOMEM;
return -rte_errno;
}
- rxq->possessor = true;
}
rxq->priv = priv;
rxq->idx = idx;
@@ -954,6 +953,7 @@ mlx5_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
/* Join owner list. */
LIST_INSERT_HEAD(&rxq_ctrl->owners, rxq, owner_entry);
rxq->ctrl = rxq_ctrl;
+ rte_atomic_fetch_add_explicit(&rxq_ctrl->ctrl_ref, 1, rte_memory_order_relaxed);
mlx5_rxq_ref(dev, idx);
DRV_LOG(DEBUG, "port %u adding Rx queue %u to list",
dev->data->port_id, idx);
@@ -1971,9 +1971,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
tmpl->rxq.shared = 1;
tmpl->share_group = conf->share_group;
tmpl->share_qid = conf->share_qid;
- LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
}
- LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
+ LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+ rte_atomic_store_explicit(&tmpl->ctrl_ref, 1, rte_memory_order_relaxed);
return tmpl;
error:
mlx5_mr_btree_free(&tmpl->rxq.mr_ctrl.cache_bh);
@@ -2025,9 +2025,9 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
tmpl->rxq.mr_ctrl.cache_bh = (struct mlx5_mr_btree) { 0 };
tmpl->rxq.idx = idx;
rxq->hairpin_conf = *hairpin_conf;
- rxq->possessor = true;
mlx5_rxq_ref(dev, idx);
- LIST_INSERT_HEAD(&priv->rxqsctrl, tmpl, next);
+ LIST_INSERT_HEAD(&priv->sh->shared_rxqs, tmpl, share_entry);
+ rte_atomic_store_explicit(&tmpl->ctrl_ref, 1, rte_memory_order_relaxed);
return tmpl;
}
@@ -2293,16 +2293,16 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
RTE_ETH_QUEUE_STATE_STOPPED;
}
} else { /* Refcnt zero, closing device. */
- if (rxq->possessor)
- LIST_REMOVE(rxq_ctrl, next);
LIST_REMOVE(rxq, owner_entry);
if (LIST_EMPTY(&rxq_ctrl->owners)) {
if (!rxq_ctrl->is_hairpin)
mlx5_mr_btree_free
(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
- if (rxq_ctrl->rxq.shared)
+ if (rte_atomic_fetch_sub_explicit(&rxq_ctrl->ctrl_ref, 1,
+ rte_memory_order_relaxed) == 1) {
LIST_REMOVE(rxq_ctrl, share_entry);
- mlx5_free(rxq_ctrl);
+ mlx5_free(rxq_ctrl);
+ }
}
dev->data->rx_queues[idx] = NULL;
mlx5_free(rxq);
@@ -2327,7 +2327,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
struct mlx5_rxq_ctrl *rxq_ctrl;
int ret = 0;
- LIST_FOREACH(rxq_ctrl, &priv->rxqsctrl, next) {
+ LIST_FOREACH(rxq_ctrl, &priv->sh->shared_rxqs, share_entry) {
DRV_LOG(DEBUG, "port %u Rx Queue %u still referenced",
dev->data->port_id, rxq_ctrl->rxq.idx);
++ret;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.451298100 +0800
+++ 0071-net-mlx5-fix-Rx-queue-control-management.patch 2024-12-06 23:26:44.043044827 +0800
@@ -1 +1 @@
-From 3c9a82fa6edc06c1d4dc6c0ac53609002c4d9462 Mon Sep 17 00:00:00 2001
+From 0c7dbcd34ae74fdc5a51525690f5ba70ad6edc25 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 3c9a82fa6edc06c1d4dc6c0ac53609002c4d9462 ]
@@ -21 +23,0 @@
-CC: stable@dpdk.org
@@ -34 +36 @@
-index bc25d72c0e..6e8295110e 100644
+index bce1d9e749..9a6bd976c2 100644
@@ -37 +39 @@
-@@ -2004,7 +2004,6 @@ struct mlx5_priv {
+@@ -1886,7 +1886,6 @@ struct mlx5_priv {
@@ -46 +48 @@
-index f8cfa661ec..d631ed150c 100644
+index fdc7c3ea54..6286eef010 100644
@@ -49 +51 @@
-@@ -1648,13 +1648,13 @@ flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
+@@ -1748,13 +1748,13 @@ flow_rxq_mark_flag_set(struct rte_eth_dev *dev)
@@ -66 +68 @@
-index 9bcb43b007..da7c448948 100644
+index 08ab0a042d..f78fae26d3 100644
@@ -69 +71 @@
-@@ -151,13 +151,13 @@ struct __rte_cache_aligned mlx5_rxq_data {
+@@ -151,13 +151,13 @@ struct mlx5_rxq_data {
@@ -89 +91 @@
- RTE_ATOMIC(uint32_t) refcnt; /* Reference counter. */
+ uint32_t refcnt; /* Reference counter. */
@@ -93 +95 @@
-index 5eac224b76..d437835b73 100644
+index 82d8f29b31..aa2e8fd9e3 100644
@@ -112 +114 @@
-@@ -1970,9 +1970,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
+@@ -1971,9 +1971,9 @@ mlx5_rxq_new(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
@@ -124 +126 @@
-@@ -2024,9 +2024,9 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
+@@ -2025,9 +2025,9 @@ mlx5_rxq_hairpin_new(struct rte_eth_dev *dev, struct mlx5_rxq_priv *rxq,
@@ -136 +138 @@
-@@ -2292,16 +2292,16 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2293,16 +2293,16 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
@@ -157 +159 @@
-@@ -2326,7 +2326,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
+@@ -2327,7 +2327,7 @@ mlx5_rxq_verify(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/mlx5: fix misalignment' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (70 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue control management' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix default RSS flows creation order' " Xueming Li
` (24 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Shani Peretz; +Cc: Xueming Li, Bing Zhao, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=83af285731ae630e8db81a4d5a3d2e93f6826726
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 83af285731ae630e8db81a4d5a3d2e93f6826726 Mon Sep 17 00:00:00 2001
From: Shani Peretz <shperetz@nvidia.com>
Date: Tue, 12 Nov 2024 10:21:26 +0200
Subject: [PATCH] common/mlx5: fix misalignment
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 90967539d0d1afcfd5237ed85efdc430359a0e6b ]
ASan reported a runtime error due to misalignment
involving three structures.
The first issue arises when accessing
l_inconst->cache[MLX5_LIST_GLOBAL]->h.
If struct mlx5_list_cache is not properly aligned, the pointer gc,
assigned to l_inconst->cache[MLX5_LIST_GLOBAL], could be misaligned.
To address this, the __rte_aligned(16) attribute was added to
struct mlx5_list_inconst in struct mlx5_list, which includes struct
mlx5_list_cache, ensuring that the entire mlx5_list structure,
including mlx5_list_cache, is aligned to 64 bytes.
To resolve misalignment issues with struct mlx5_flow_handle,
the initialization of resources for the ipool ensures that
the ipool size is rounded up to an 8-byte boundary.
The error in assigning values to actions[i] was due to potential
padding or misalignment in struct mlx5_modification_cmd.
To prevent such issues, the __rte_packed attribute was added to
struct mlx5_modification_cmd, ensuring that the structure is packed
without extra padding, which helps avoid misaligned memory accesses.
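For illustration only (the structures and sizes below are examples,
not the mlx5 ones), the effect of __rte_packed, __rte_aligned() and
RTE_ALIGN_MUL_CEIL() can be checked with a small standalone program:

#include <stdio.h>
#include <stdint.h>
#include <stdalign.h>
#include <rte_common.h>

struct unpacked_cmd {
	uint32_t data0;
	uint16_t field;
	uint8_t  action;
};				/* may carry trailing padding */

struct packed_cmd {
	uint32_t data0;
	uint16_t field;
	uint8_t  action;
} __rte_packed;			/* no padding: exactly 7 bytes */

struct __rte_aligned(16) aligned_list {
	uint64_t head;
};				/* instances start on a 16-byte boundary */

int
main(void)
{
	printf("unpacked=%zu packed=%zu alignment=%zu rounded=%zu\n",
	       sizeof(struct unpacked_cmd), sizeof(struct packed_cmd),
	       alignof(struct aligned_list),
	       /* round an element size up to an 8-byte multiple,
	        * as the ipool fix does
	        */
	       RTE_ALIGN_MUL_CEIL(sizeof(struct packed_cmd), (size_t)8));
	return 0;
}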
Two performance degradation tests were conducted.
Following are the results comparing this commit to the most recent
commit in mlnx_dpdk_22.11 at that time (b69408ae453).
Before the ASan misalignment fix (average kflows/sec):
Insertion - 4461.269, Deletion - 7799.9992
After:
Insertion - 4579.0642, Deletion - 7913.0034
Fixes: 9a4c36880704 ("common/mlx5: optimize cache list object memory")
Signed-off-by: Shani Peretz <shperetz@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
---
drivers/common/mlx5/mlx5_common_utils.h | 2 +-
drivers/common/mlx5/mlx5_prm.h | 4 ++--
drivers/net/mlx5/mlx5.c | 2 +-
3 files changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_common_utils.h b/drivers/common/mlx5/mlx5_common_utils.h
index ae15119a33..6db0105c53 100644
--- a/drivers/common/mlx5/mlx5_common_utils.h
+++ b/drivers/common/mlx5/mlx5_common_utils.h
@@ -131,7 +131,7 @@ struct mlx5_list_inconst {
* For huge amount of entries, please consider hash list.
*
*/
-struct mlx5_list {
+struct __rte_aligned(16) mlx5_list {
struct mlx5_list_const l_const;
struct mlx5_list_inconst l_inconst;
};
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 0875256e36..79533ff35a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -923,7 +923,7 @@ struct mlx5_modification_cmd {
unsigned int field:12;
unsigned int action_type:4;
};
- };
+ } __rte_packed;
union {
uint32_t data1;
uint8_t data[4];
@@ -934,7 +934,7 @@ struct mlx5_modification_cmd {
unsigned int dst_field:12;
unsigned int rsvd4:4;
};
- };
+ } __rte_packed;
};
typedef uint64_t u64;
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 25182bce39..584a51b393 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -906,7 +906,7 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh)
*/
case MLX5_IPOOL_MLX5_FLOW:
cfg.size = sh->config.dv_flow_en ?
- sizeof(struct mlx5_flow_handle) :
+ RTE_ALIGN_MUL_CEIL(sizeof(struct mlx5_flow_handle), 8) :
MLX5_FLOW_HANDLE_VERBS_SIZE;
break;
#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.500897901 +0800
+++ 0072-common-mlx5-fix-misalignment.patch 2024-12-06 23:26:44.043044827 +0800
@@ -1 +1 @@
-From 90967539d0d1afcfd5237ed85efdc430359a0e6b Mon Sep 17 00:00:00 2001
+From 83af285731ae630e8db81a4d5a3d2e93f6826726 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 90967539d0d1afcfd5237ed85efdc430359a0e6b ]
@@ -38 +40,0 @@
-Cc: stable@dpdk.org
@@ -49 +51 @@
-index c5eff7a0bf..9139bc6829 100644
+index ae15119a33..6db0105c53 100644
@@ -62 +64 @@
-index 210158350d..2d82807bc2 100644
+index 0875256e36..79533ff35a 100644
@@ -65 +67 @@
-@@ -941,7 +941,7 @@ struct mlx5_modification_cmd {
+@@ -923,7 +923,7 @@ struct mlx5_modification_cmd {
@@ -74 +76 @@
-@@ -952,7 +952,7 @@ struct mlx5_modification_cmd {
+@@ -934,7 +934,7 @@ struct mlx5_modification_cmd {
@@ -84 +86 @@
-index 52b90e6ff3..6e4473e2f4 100644
+index 25182bce39..584a51b393 100644
@@ -87 +89 @@
-@@ -907,7 +907,7 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh)
+@@ -906,7 +906,7 @@ mlx5_flow_ipool_create(struct mlx5_dev_ctx_shared *sh)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix default RSS flows creation order' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (71 preceding siblings ...)
2024-12-07 8:00 ` patch 'common/mlx5: fix misalignment' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue reference count in flushing flows' " Xueming Li
` (23 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bing Zhao; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8db2e5cda6bfbea1ff3c75d4855c4488c2d0dc74
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8db2e5cda6bfbea1ff3c75d4855c4488c2d0dc74 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Wed, 13 Nov 2024 09:19:52 +0200
Subject: [PATCH] net/mlx5: fix default RSS flows creation order
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 9a66bb734e1311bcc2bf3b286f7ab6d28975c5c7 ]
In both SWS and HWS mode, default ingress RSS flows are always
created via the driver on the root table. In the current driver,
the first created flow rules will be matched first when:
1. >= 2 rules can be matched on the root table.
2. the rules have the same priority.
All MC/BC flow rules have the same priority, and the driver discards
the input priority coming from user space. All rules have a fixed
priority of 32 when the Ethernet destination MAC is a MC or BC
address.
In the SWS non-template API, all the device rules are added into the
list and applied in reverse order.
This patch syncs default flow rule creation order between SWS and HWS.
The order should be:
1. IPv4(6) + TCP/UDP, if required.
2. IPv4(6) only, if required.
3. Non-IP traffic.
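The diff below also changes the loop index 'j' from unsigned to signed.
One plausible reason (an assumption, not stated in the commit message)
is that walking such a priority-ordered table from the last entry down
to the first needs a signed index, as in this generic sketch:

#include <stdio.h>

enum rss_kind {		/* the enum order decides the creation order */
	RSS_IPV6_UDP = 0,
	RSS_IPV6_TCP,
	RSS_IPV4_UDP,
	RSS_IPV4_TCP,
	RSS_IPV6,
	RSS_IPV4,
	RSS_NON_IP,
	RSS_MAX,
};

int
main(void)
{
	/* With "unsigned int j" the condition j >= 0 is always true and
	 * the loop below would never terminate; "int j" is required.
	 */
	for (int j = RSS_MAX - 1; j >= 0; j--)
		printf("create default flow for kind %d\n", j);
	return 0;
}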
Fixes: 9fa7c1cddb85 ("net/mlx5: create control flow rules with HWS")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 8 ++++----
drivers/net/mlx5/mlx5_flow_hw.c | 2 +-
2 files changed, 5 insertions(+), 5 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index afb3c3b72f..01f0eab1fa 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2443,13 +2443,13 @@ enum mlx5_flow_ctrl_rx_eth_pattern_type {
/* All types of RSS actions used in control flow rules. */
enum mlx5_flow_ctrl_rx_expanded_rss_type {
- MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_NON_IP = 0,
- MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4,
+ MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP = 0,
+ MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP,
MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_UDP,
MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4_TCP,
MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6,
- MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_UDP,
- MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV6_TCP,
+ MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_IPV4,
+ MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_NON_IP,
MLX5_FLOW_HW_CTRL_RX_EXPANDED_RSS_MAX,
};
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 31e11763db..3dc26d5a0b 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -12812,7 +12812,7 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flow_hw_ctrl_rx *hw_ctrl_rx;
unsigned int i;
- unsigned int j;
+ int j;
int ret = 0;
RTE_SET_USED(priv);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.539518101 +0800
+++ 0073-net-mlx5-fix-default-RSS-flows-creation-order.patch 2024-12-06 23:26:44.063044827 +0800
@@ -1 +1 @@
-From 9a66bb734e1311bcc2bf3b286f7ab6d28975c5c7 Mon Sep 17 00:00:00 2001
+From 8db2e5cda6bfbea1ff3c75d4855c4488c2d0dc74 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 9a66bb734e1311bcc2bf3b286f7ab6d28975c5c7 ]
@@ -27 +29,0 @@
-Cc: stable@dpdk.org
@@ -37 +39 @@
-index a56e8be97e..bcc2782460 100644
+index afb3c3b72f..01f0eab1fa 100644
@@ -40 +42 @@
-@@ -2916,13 +2916,13 @@ enum mlx5_flow_ctrl_rx_eth_pattern_type {
+@@ -2443,13 +2443,13 @@ enum mlx5_flow_ctrl_rx_eth_pattern_type {
@@ -59 +61 @@
-index 6ad98d40f7..50dbaa27ab 100644
+index 31e11763db..3dc26d5a0b 100644
@@ -62 +64 @@
-@@ -16164,7 +16164,7 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
+@@ -12812,7 +12812,7 @@ mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix Rx queue reference count in flushing flows' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (72 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix default RSS flows creation order' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix miniCQEs number calculation' " Xueming Li
` (22 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bing Zhao; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=95cad5da69983fed51d4ff6b2636bface18c53c8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 95cad5da69983fed51d4ff6b2636bface18c53c8 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Wed, 13 Nov 2024 09:22:44 +0200
Subject: [PATCH] net/mlx5: fix Rx queue reference count in flushing flows
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 1ea333d2de220d5bad600ed50b43f91f7703c123 ]
Some indirect tables and hrxq objects are created during rule creation
with a QUEUE or RSS action. When stopping a port, 'dev_started' is set
to 0 at the very beginning. mlx5_ind_table_obj_release() should still
dereference the queue(s) when it is called while polling for flow rule
deletion, because a flow with a Q/RSS action always refers to the
active Rx queues.
The callback can only take one input parameter, so a global per-device
flag is used to indicate that the flushing of user flows is in
progress; the reference count of the queue(s) should then be
decreased.
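The control flow of the fix, restated as a minimal sketch with
illustrative names (the real code lives in mlx5_flow_list_flush() and
__mlx5_hrxq_remove(), shown in the diff below):

#include <stdbool.h>

/* Illustrative device context, not the mlx5 private structure. */
struct dev_ctx {
	bool started;	/* dev_started: cleared early when stopping */
	bool flushing;	/* set while user flow rules are being flushed */
};

/* Release path: decide whether to drop Rx queue references. */
static void
release_ind_table(struct dev_ctx *dev, bool hw_steering)
{
	bool deref_rxqs = true;

	/* A stopped port normally skips the dereference for HWS objects,
	 * but not while the flush of user rules is still draining them.
	 */
	if (!dev->started && hw_steering && !dev->flushing)
		deref_rxqs = false;
	/* ... release the indirection table, dereferencing the queues
	 * only when deref_rxqs is true ...
	 */
	(void)deref_rxqs;
}

/* Flush path: bracket the operation with the flag. */
static void
flush_user_flows(struct dev_ctx *dev)
{
	dev->flushing = true;
	/* ... destroy all user flow rules, polling completions ... */
	dev->flushing = false;
}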
Fixes: 3a2f674b6aa8 ("net/mlx5: add queue and RSS HW steering action")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow.c | 2 ++
drivers/net/mlx5/mlx5_rxq.c | 8 +++++---
3 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9a6bd976c2..55c29e31a2 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1938,6 +1938,7 @@ struct mlx5_priv {
uint32_t hws_mark_refcnt; /* HWS mark action reference counter. */
struct rte_pmd_mlx5_flow_engine_mode_info mode_info; /* Process set flow engine info. */
struct mlx5_flow_hw_attr *hw_attr; /* HW Steering port configuration. */
+ bool hws_rule_flushing; /**< Whether this port is in rules flushing stage. */
#if defined(HAVE_IBV_FLOW_DV_SUPPORT) || !defined(HAVE_INFINIBAND_VERBS_H)
/* Item template list. */
LIST_HEAD(flow_hw_itt, rte_flow_pattern_template) flow_hw_itt;
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 6286eef010..1e9484f372 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8101,7 +8101,9 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
if (priv->sh->config.dv_flow_en == 2 &&
type == MLX5_FLOW_TYPE_GEN) {
+ priv->hws_rule_flushing = true;
flow_hw_q_flow_flush(dev, NULL);
+ priv->hws_rule_flushing = false;
return;
}
#endif
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index aa2e8fd9e3..6d28bcb57c 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2895,6 +2895,7 @@ static void
__mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
{
struct mlx5_priv *priv = dev->data->dev_private;
+ bool deref_rxqs = true;
#ifdef HAVE_IBV_FLOW_DV_SUPPORT
if (hrxq->hws_flags)
@@ -2904,9 +2905,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
#endif
priv->obj_ops.hrxq_destroy(hrxq);
if (!hrxq->standalone) {
- mlx5_ind_table_obj_release(dev, hrxq->ind_table,
- hrxq->hws_flags ?
- (!!dev->data->dev_started) : true);
+ if (!dev->data->dev_started && hrxq->hws_flags &&
+ !priv->hws_rule_flushing)
+ deref_rxqs = false;
+ mlx5_ind_table_obj_release(dev, hrxq->ind_table, deref_rxqs);
}
mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx);
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.579902802 +0800
+++ 0074-net-mlx5-fix-Rx-queue-reference-count-in-flushing-fl.patch 2024-12-06 23:26:44.073044826 +0800
@@ -1 +1 @@
-From 1ea333d2de220d5bad600ed50b43f91f7703c123 Mon Sep 17 00:00:00 2001
+From 95cad5da69983fed51d4ff6b2636bface18c53c8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 1ea333d2de220d5bad600ed50b43f91f7703c123 ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
- drivers/net/mlx5/mlx5_flow.c | 3 +++
+ drivers/net/mlx5/mlx5_flow.c | 2 ++
@@ -27 +29 @@
- 3 files changed, 9 insertions(+), 3 deletions(-)
+ 3 files changed, 8 insertions(+), 3 deletions(-)
@@ -30 +32 @@
-index 6e8295110e..89d277b523 100644
+index 9a6bd976c2..55c29e31a2 100644
@@ -33,2 +35,2 @@
-@@ -2060,6 +2060,7 @@ struct mlx5_priv {
- RTE_ATOMIC(uint32_t) hws_mark_refcnt; /* HWS mark action reference counter. */
+@@ -1938,6 +1938,7 @@ struct mlx5_priv {
+ uint32_t hws_mark_refcnt; /* HWS mark action reference counter. */
@@ -42 +44 @@
-index d631ed150c..16ddd05448 100644
+index 6286eef010..1e9484f372 100644
@@ -45 +47 @@
-@@ -8118,7 +8118,10 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
+@@ -8101,7 +8101,9 @@ mlx5_flow_list_flush(struct rte_eth_dev *dev, enum mlx5_flow_type type,
@@ -52 +54 @@
-+ return;
+ return;
@@ -55 +56,0 @@
- MLX5_IPOOL_FOREACH(priv->flows[type], fidx, flow) {
@@ -57 +58 @@
-index d437835b73..0737f60272 100644
+index aa2e8fd9e3..6d28bcb57c 100644
@@ -60 +61 @@
-@@ -2894,6 +2894,7 @@ static void
+@@ -2895,6 +2895,7 @@ static void
@@ -68 +69 @@
-@@ -2903,9 +2904,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
+@@ -2904,9 +2905,10 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix miniCQEs number calculation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (73 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue reference count in flushing flows' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'bus/dpaa: fix lock condition during error handling' " Xueming Li
` (21 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Alexander Kozyrev; +Cc: Xueming Li, Viacheslav Ovsiienko, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1ca97699fc3a5459ad9e820fac0ae1d96633b1ee
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1ca97699fc3a5459ad9e820fac0ae1d96633b1ee Mon Sep 17 00:00:00 2001
From: Alexander Kozyrev <akozyrev@nvidia.com>
Date: Wed, 13 Nov 2024 15:50:54 +0200
Subject: [PATCH] net/mlx5: fix miniCQEs number calculation
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit a7ae9ba1f8c888a7ed546a88a954426477cd24a4 ]
Use the information from the CQE, not from the title packet,
for getting the number of miniCQEs in the compressed CQEs array.
This way we can avoid segfaults in the rxq_cq_decompress_v()
in case of mbuf corruption (due to double mbuf free, for example).
Fixes: 6cb559d67b83 ("net/mlx5: add vectorized Rx/Tx burst for x86")
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_rxtx_vec_altivec.h | 3 +--
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 3 +--
drivers/net/mlx5/mlx5_rxtx_vec_sse.h | 3 +--
3 files changed, 3 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
index f6e74f4180..efe0db4ca5 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_altivec.h
@@ -96,8 +96,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
11, 10, 9, 8}; /* bswap32, rss */
/* Restore the compressed count. Must be 16 bits. */
uint16_t mcqe_n = (rxq->cqe_comp_layout) ?
- (MLX5_CQE_NUM_MINIS(cq->op_own) + 1) :
- t_pkt->data_len + (rxq->crc_present * RTE_ETHER_CRC_LEN);
+ (MLX5_CQE_NUM_MINIS(cq->op_own) + 1U) : rte_be_to_cpu_32(cq->byte_cnt);
uint16_t pkts_n = mcqe_n;
const __vector unsigned char rearm =
(__vector unsigned char)vec_vsx_ld(0,
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 942d395dc9..02817a9645 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -95,8 +95,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
};
/* Restore the compressed count. Must be 16 bits. */
uint16_t mcqe_n = (rxq->cqe_comp_layout) ?
- (MLX5_CQE_NUM_MINIS(cq->op_own) + 1) :
- t_pkt->data_len + (rxq->crc_present * RTE_ETHER_CRC_LEN);
+ (MLX5_CQE_NUM_MINIS(cq->op_own) + 1U) : rte_be_to_cpu_32(cq->byte_cnt);
uint16_t pkts_n = mcqe_n;
const uint64x2_t rearm =
vld1q_u64((void *)&t_pkt->rearm_data);
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
index fb59c11346..e7271abef6 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_sse.h
@@ -94,8 +94,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
-1, -1, -1, -1 /* skip packet_type */);
/* Restore the compressed count. Must be 16 bits. */
uint16_t mcqe_n = (rxq->cqe_comp_layout) ?
- (MLX5_CQE_NUM_MINIS(cq->op_own) + 1) :
- t_pkt->data_len + (rxq->crc_present * RTE_ETHER_CRC_LEN);
+ (MLX5_CQE_NUM_MINIS(cq->op_own) + 1U) : rte_be_to_cpu_32(cq->byte_cnt);
uint16_t pkts_n = mcqe_n;
const __m128i rearm =
_mm_loadu_si128((__m128i *)&t_pkt->rearm_data);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.615555002 +0800
+++ 0075-net-mlx5-fix-miniCQEs-number-calculation.patch 2024-12-06 23:26:44.073044826 +0800
@@ -1 +1 @@
-From a7ae9ba1f8c888a7ed546a88a954426477cd24a4 Mon Sep 17 00:00:00 2001
+From 1ca97699fc3a5459ad9e820fac0ae1d96633b1ee Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit a7ae9ba1f8c888a7ed546a88a954426477cd24a4 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index ca614ecf9d..240987d03d 100644
+index f6e74f4180..efe0db4ca5 100644
@@ -26 +28 @@
-@@ -98,8 +98,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -96,8 +96,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -37 +39 @@
-index 519fff5b2c..dc1d30753d 100644
+index 942d395dc9..02817a9645 100644
@@ -40 +42 @@
-@@ -98,8 +98,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -95,8 +95,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
@@ -51 +53 @@
-index 0a2b67e750..81a177fce7 100644
+index fb59c11346..e7271abef6 100644
@@ -54 +56 @@
-@@ -96,8 +96,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
+@@ -94,8 +94,7 @@ rxq_cq_decompress_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'bus/dpaa: fix lock condition during error handling' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (74 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix miniCQEs number calculation' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/iavf: add segment-length check to Tx prep' " Xueming Li
` (20 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Hemant Agrawal; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4aeba4668f3ad2f4fd908a46d0ed687c2b575870
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4aeba4668f3ad2f4fd908a46d0ed687c2b575870 Mon Sep 17 00:00:00 2001
From: Hemant Agrawal <hemant.agrawal@nxp.com>
Date: Thu, 14 Nov 2024 13:14:35 +0530
Subject: [PATCH] bus/dpaa: fix lock condition during error handling
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c7c3a329750b81bdaeb3f7ceffac0ec3a65f61f8 ]
The error handling is missing FQ unlock code.
Detected by pvs-studio
Bug 89-93: very suspicious synchronization
The analyzer issued a pack of V1020 warnings that a resource
might remain blocked.
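The pattern of the fix, as a generic sketch (the driver itself uses its
FQLOCK()/FQUNLOCK() macros rather than an rte_spinlock):

#include <rte_spinlock.h>

/* Every early exit taken while the lock is held must release it first,
 * which is exactly what the fix adds before "goto escape".
 */
static int
claim_resource(rte_spinlock_t *lock, unsigned int *state,
	       unsigned int busy_flag)
{
	rte_spinlock_lock(lock);
	if (*state & busy_flag) {
		rte_spinlock_unlock(lock); /* the originally missing unlock */
		return -1;		   /* resource already claimed */
	}
	*state |= busy_flag;
	rte_spinlock_unlock(lock);
	return 0;
}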
Fixes: c47ff048b99a ("bus/dpaa: add QMAN driver core routines")
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/bus/dpaa/base/qbman/qman.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index f06992ca48..3a1a843ba0 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -2169,8 +2169,10 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
if (!p->vdqcr_owned) {
FQLOCK(fq);
- if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+ if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+ FQUNLOCK(fq);
goto escape;
+ }
fq_set(fq, QMAN_FQ_STATE_VDQCR);
FQUNLOCK(fq);
p->vdqcr_owned = fq;
@@ -2203,8 +2205,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
if (!p->vdqcr_owned) {
FQLOCK(fq);
- if (fq_isset(fq, QMAN_FQ_STATE_VDQCR))
+ if (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {
+ FQUNLOCK(fq);
goto escape;
+ }
fq_set(fq, QMAN_FQ_STATE_VDQCR);
FQUNLOCK(fq);
p->vdqcr_owned = fq;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.649381102 +0800
+++ 0076-bus-dpaa-fix-lock-condition-during-error-handling.patch 2024-12-06 23:26:44.073044826 +0800
@@ -1 +1 @@
-From c7c3a329750b81bdaeb3f7ceffac0ec3a65f61f8 Mon Sep 17 00:00:00 2001
+From 4aeba4668f3ad2f4fd908a46d0ed687c2b575870 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c7c3a329750b81bdaeb3f7ceffac0ec3a65f61f8 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 9c90ee25a6..c48fa3e073 100644
+index f06992ca48..3a1a843ba0 100644
@@ -24 +26 @@
-@@ -2138,8 +2138,10 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
+@@ -2169,8 +2169,10 @@ int qman_set_vdq(struct qman_fq *fq, u16 num, uint32_t vdqcr_flags)
@@ -36 +38 @@
-@@ -2172,8 +2174,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
+@@ -2203,8 +2205,10 @@ int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/iavf: add segment-length check to Tx prep' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (75 preceding siblings ...)
2024-12-07 8:00 ` patch 'bus/dpaa: fix lock condition during error handling' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/i40e: check register read for outer VLAN' " Xueming Li
` (19 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bruce Richardson
Cc: Xueming Li, Padraig Connolly, Vladimir Medvedkin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=ef3dab691fbfae50f5f9ead05f4f6e2503d1c386
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From ef3dab691fbfae50f5f9ead05f4f6e2503d1c386 Mon Sep 17 00:00:00 2001
From: Bruce Richardson <bruce.richardson@intel.com>
Date: Mon, 11 Nov 2024 16:42:20 +0000
Subject: [PATCH] net/iavf: add segment-length check to Tx prep
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4523e0753b243066357f98fd9739fde72605d0fb ]
In the Tx prep function, the metadata checks only covered the packet
length and ignored the data length. For single-buffer packets, we can
quickly check that the data length equals the packet length.
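A standalone restatement of the check (illustrative helper name;
min_pkt_len stands in for the driver's IAVF_TX_MIN_PKT_LEN constant):

#include <stdbool.h>
#include <rte_mbuf.h>

/* A packet is acceptable when it meets the minimum length and, for a
 * single-segment mbuf, when data_len covers the whole pkt_len.
 */
static bool
tx_pkt_metadata_ok(const struct rte_mbuf *m, uint32_t min_pkt_len)
{
	if (m->pkt_len < min_pkt_len)
		return false;
	if (m->nb_segs == 1 && m->data_len != m->pkt_len)
		return false;
	return true;
}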
Fixes: 19ee91c6bd9a ("net/iavf: check illegal packet sizes")
Reported-by: Padraig Connolly <padraig.j.connolly@intel.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Tested-by: Padraig Connolly <padraig.j.connolly@intel.com>
---
drivers/net/iavf/iavf_rxtx.c | 6 +++++-
1 file changed, 5 insertions(+), 1 deletion(-)
diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index ec0dffa30e..5fbc581b95 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -3668,7 +3668,11 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
return i;
}
- if (m->pkt_len < IAVF_TX_MIN_PKT_LEN) {
+ /* valid packets are greater than min size, and single-buffer pkts
+ * must have data_len == pkt_len
+ */
+ if (m->pkt_len < IAVF_TX_MIN_PKT_LEN ||
+ (m->nb_segs == 1 && m->data_len != m->pkt_len)) {
rte_errno = EINVAL;
return i;
}
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.679573103 +0800
+++ 0077-net-iavf-add-segment-length-check-to-Tx-prep.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From 4523e0753b243066357f98fd9739fde72605d0fb Mon Sep 17 00:00:00 2001
+From ef3dab691fbfae50f5f9ead05f4f6e2503d1c386 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4523e0753b243066357f98fd9739fde72605d0fb ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 4850b9e381..6a093c6746 100644
+index ec0dffa30e..5fbc581b95 100644
@@ -25 +27 @@
-@@ -3677,7 +3677,11 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
+@@ -3668,7 +3668,11 @@ iavf_prep_pkts(__rte_unused void *tx_queue, struct rte_mbuf **tx_pkts,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/i40e: check register read for outer VLAN' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (76 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/iavf: add segment-length check to Tx prep' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'common/dpaax/caamflib: enable fallthrough warnings' " Xueming Li
` (18 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Vladimir Medvedkin; +Cc: Xueming Li, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=fc13431853e9c89003bf6262a51b7eb935963852
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From fc13431853e9c89003bf6262a51b7eb935963852 Mon Sep 17 00:00:00 2001
From: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Date: Fri, 15 Nov 2024 19:14:25 +0000
Subject: [PATCH] net/i40e: check register read for outer VLAN
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit c11c52dd5d2a19c97616ac32a1d4911c48f157d4 ]
'i40e_get_outer_vlan()' does not check the 'i40e_aq_debug_read_register()'
return value. This patch fixes the issue by checking the return value
and, on error, having the i40e_get_outer_vlan() function return that
error back to the caller.
This in turn requires a change in the return type of that function and
updates to the places where it is called to:
* handle the error, and
* handle the tpid being returned as an "out" parameter rather than as a
return code.
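For readers unfamiliar with the pattern, here is a minimal sketch of turning
a value-returning helper into one that reports errors and hands the value
back through an out parameter; read_reg() and the other names below are
hypothetical, not the i40e code:

#include <stdio.h>
#include <stdint.h>
#include <errno.h>

/* Hypothetical register accessor used only for this sketch. */
static int read_reg(uint64_t *val)
{
	*val = 0x8100;	/* pretend the hardware returned a TPID */
	return 0;
}

/* Old shape: the value is the return, so a read failure is lost. */
static uint16_t get_tpid_old(void)
{
	uint64_t reg = 0;

	(void)read_reg(&reg);	/* error silently ignored */
	return (uint16_t)(reg & 0xFFFF);
}

/* New shape: the status is the return, the value goes out via *tpid. */
static int get_tpid(uint16_t *tpid)
{
	uint64_t reg = 0;
	int ret = read_reg(&reg);

	if (ret != 0)
		return -EIO;
	*tpid = (uint16_t)(reg & 0xFFFF);
	return 0;
}

int main(void)
{
	uint16_t tpid = 0;

	printf("old: 0x%x\n", (unsigned)get_tpid_old());
	if (get_tpid(&tpid) == 0)
		printf("new: 0x%x\n", (unsigned)tpid);
	return 0;
}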
Coverity issue: 445518
Fixes: 86eb05d6350b ("net/i40e: add flow validate function")
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/i40e/i40e_flow.c | 77 ++++++++++++++++++++++++++++++------
1 file changed, 65 insertions(+), 12 deletions(-)
diff --git a/drivers/net/i40e/i40e_flow.c b/drivers/net/i40e/i40e_flow.c
index 92165c8422..273cb2d80c 100644
--- a/drivers/net/i40e/i40e_flow.c
+++ b/drivers/net/i40e/i40e_flow.c
@@ -1263,27 +1263,31 @@ i40e_flow_parse_attr(const struct rte_flow_attr *attr,
return 0;
}
-static uint16_t
-i40e_get_outer_vlan(struct rte_eth_dev *dev)
+static int
+i40e_get_outer_vlan(struct rte_eth_dev *dev, uint16_t *tpid)
{
struct i40e_hw *hw = I40E_DEV_PRIVATE_TO_HW(dev->data->dev_private);
int qinq = dev->data->dev_conf.rxmode.offloads &
RTE_ETH_RX_OFFLOAD_VLAN_EXTEND;
uint64_t reg_r = 0;
uint16_t reg_id;
- uint16_t tpid;
+ int ret;
if (qinq)
reg_id = 2;
else
reg_id = 3;
- i40e_aq_debug_read_register(hw, I40E_GL_SWT_L2TAGCTRL(reg_id),
+ ret = i40e_aq_debug_read_register(hw, I40E_GL_SWT_L2TAGCTRL(reg_id),
&reg_r, NULL);
+ if (ret != I40E_SUCCESS) {
+ PMD_DRV_LOG(ERR, "Failed to read from L2 tag ctrl register [%d]", reg_id);
+ return -EIO;
+ }
- tpid = (reg_r >> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT) & 0xFFFF;
+ *tpid = (reg_r >> I40E_GL_SWT_L2TAGCTRL_ETHERTYPE_SHIFT) & 0xFFFF;
- return tpid;
+ return 0;
}
/* 1. Last in item should be NULL as range is not supported.
@@ -1303,6 +1307,8 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
const struct rte_flow_item_eth *eth_spec;
const struct rte_flow_item_eth *eth_mask;
enum rte_flow_item_type item_type;
+ int ret;
+ uint16_t tpid;
for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
if (item->last) {
@@ -1361,8 +1367,23 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
if (filter->ether_type == RTE_ETHER_TYPE_IPV4 ||
filter->ether_type == RTE_ETHER_TYPE_IPV6 ||
- filter->ether_type == RTE_ETHER_TYPE_LLDP ||
- filter->ether_type == i40e_get_outer_vlan(dev)) {
+ filter->ether_type == RTE_ETHER_TYPE_LLDP) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Unsupported ether_type in control packet filter.");
+ return -rte_errno;
+ }
+
+ ret = i40e_get_outer_vlan(dev, &tpid);
+ if (ret != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Can not get the Ethertype identifying the L2 tag");
+ return -rte_errno;
+ }
+ if (filter->ether_type == tpid) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item,
@@ -1370,6 +1391,7 @@ i40e_flow_parse_ethertype_pattern(struct rte_eth_dev *dev,
" control packet filter.");
return -rte_errno;
}
+
break;
default:
break;
@@ -1641,6 +1663,7 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
bool outer_ip = true;
uint8_t field_idx;
int ret;
+ uint16_t tpid;
memset(off_arr, 0, sizeof(off_arr));
memset(len_arr, 0, sizeof(len_arr));
@@ -1709,14 +1732,29 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
ether_type = rte_be_to_cpu_16(eth_spec->hdr.ether_type);
if (ether_type == RTE_ETHER_TYPE_IPV4 ||
- ether_type == RTE_ETHER_TYPE_IPV6 ||
- ether_type == i40e_get_outer_vlan(dev)) {
+ ether_type == RTE_ETHER_TYPE_IPV6) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item,
"Unsupported ether_type.");
return -rte_errno;
}
+ ret = i40e_get_outer_vlan(dev, &tpid);
+ if (ret != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Can not get the Ethertype identifying the L2 tag");
+ return -rte_errno;
+ }
+ if (ether_type == tpid) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Unsupported ether_type.");
+ return -rte_errno;
+ }
+
input_set |= I40E_INSET_LAST_ETHER_TYPE;
filter->input.flow.l2_flow.ether_type =
eth_spec->hdr.ether_type;
@@ -1763,14 +1801,29 @@ i40e_flow_parse_fdir_pattern(struct rte_eth_dev *dev,
rte_be_to_cpu_16(vlan_spec->hdr.eth_proto);
if (ether_type == RTE_ETHER_TYPE_IPV4 ||
- ether_type == RTE_ETHER_TYPE_IPV6 ||
- ether_type == i40e_get_outer_vlan(dev)) {
+ ether_type == RTE_ETHER_TYPE_IPV6) {
rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ITEM,
item,
"Unsupported inner_type.");
return -rte_errno;
}
+ ret = i40e_get_outer_vlan(dev, &tpid);
+ if (ret != 0) {
+ rte_flow_error_set(error, EIO,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Can not get the Ethertype identifying the L2 tag");
+ return -rte_errno;
+ }
+ if (ether_type == tpid) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ item,
+ "Unsupported ether_type.");
+ return -rte_errno;
+ }
+
input_set |= I40E_INSET_LAST_ETHER_TYPE;
filter->input.flow.l2_flow.ether_type =
vlan_spec->hdr.eth_proto;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.712168903 +0800
+++ 0078-net-i40e-check-register-read-for-outer-VLAN.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From c11c52dd5d2a19c97616ac32a1d4911c48f157d4 Mon Sep 17 00:00:00 2001
+From fc13431853e9c89003bf6262a51b7eb935963852 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit c11c52dd5d2a19c97616ac32a1d4911c48f157d4 ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index c6857727e8..cd598431e1 100644
+index 92165c8422..273cb2d80c 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'common/dpaax/caamflib: enable fallthrough warnings' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (77 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/i40e: check register read for outer VLAN' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/e1000/base: fix fallthrough in switch' " Xueming Li
` (17 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Hemant Agrawal, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=a6c7a5855c7bbe729aebfb297bc0bd99ab7421a2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From a6c7a5855c7bbe729aebfb297bc0bd99ab7421a2 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 14 Nov 2024 09:11:54 -0800
Subject: [PATCH] common/dpaax/caamflib: enable fallthrough warnings
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 277552e175b3529863adec9bbd8bb6288164506e ]
Fallthrough warnings catch real bugs and should not be disabled.
There are warnings in this driver in the current build.
The commit that added the disable is old, and the problematic code
appears to have already been removed.
Fixes: 2ab9a9483196 ("crypto/dpaa2_sec: fix build with GCC 7")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
drivers/common/dpaax/caamflib/rta/operation_cmd.h | 4 ----
1 file changed, 4 deletions(-)
diff --git a/drivers/common/dpaax/caamflib/rta/operation_cmd.h b/drivers/common/dpaax/caamflib/rta/operation_cmd.h
index fe1ac37ee8..563735eb88 100644
--- a/drivers/common/dpaax/caamflib/rta/operation_cmd.h
+++ b/drivers/common/dpaax/caamflib/rta/operation_cmd.h
@@ -7,10 +7,6 @@
#ifndef __RTA_OPERATION_CMD_H__
#define __RTA_OPERATION_CMD_H__
-#if defined(RTE_TOOLCHAIN_GCC) && (GCC_VERSION >= 70000)
-#pragma GCC diagnostic ignored "-Wimplicit-fallthrough"
-#endif
-
extern enum rta_sec_era rta_sec_era;
static inline int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.741908603 +0800
+++ 0079-common-dpaax-caamflib-enable-fallthrough-warnings.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From 277552e175b3529863adec9bbd8bb6288164506e Mon Sep 17 00:00:00 2001
+From a6c7a5855c7bbe729aebfb297bc0bd99ab7421a2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 277552e175b3529863adec9bbd8bb6288164506e ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/e1000/base: fix fallthrough in switch' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (78 preceding siblings ...)
2024-12-07 8:00 ` patch 'common/dpaax/caamflib: enable fallthrough warnings' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'app/procinfo: fix leak on exit' " Xueming Li
` (16 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=9a80a0abaf4df9facbefdded1635cbccdab353ac
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 9a80a0abaf4df9facbefdded1635cbccdab353ac Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 14 Nov 2024 09:11:55 -0800
Subject: [PATCH] net/e1000/base: fix fallthrough in switch
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 11a5adba21237d5905bcb3f5f695aa5a5cfecd9f ]
There is an incorrect fallthrough identified by PVS-Studio.
Even though this is in base code, it should be fixed, and
the warning should be re-enabled to prevent future bugs.
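The bug class is easy to reproduce in isolation; a minimal, hypothetical
sketch (not the e1000 code) of the pattern -Wimplicit-fallthrough guards --
with the break removed, the compiler reports the fallthrough:

#include <stdio.h>

enum mac_type { MAC_UNKNOWN, MAC_TGP, MAC_ADL };

static enum mac_type classify(int dev_id)
{
	enum mac_type t = MAC_UNKNOWN;

	switch (dev_id) {
	case 1:
	case 2:
		t = MAC_TGP;
		/* If this break were missing, execution would fall through
		 * and t would be overwritten with MAC_ADL below;
		 * -Wimplicit-fallthrough flags exactly that. */
		break;
	case 3:
	case 4:
		t = MAC_ADL;
		break;
	default:
		break;
	}
	return t;
}

int main(void)
{
	printf("%d\n", classify(2));	/* 1 (MAC_TGP) with the break in place */
	return 0;
}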
Link: https://pvs-studio.com/en/blog/posts/cpp/1183/
Fixes: f2553cb9eba6 ("net/e1000/base: add new I219 devices")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
drivers/net/e1000/base/e1000_82575.c | 1 +
drivers/net/e1000/base/e1000_api.c | 1 +
drivers/net/e1000/base/meson.build | 3 +--
3 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/e1000/base/e1000_82575.c b/drivers/net/e1000/base/e1000_82575.c
index 7c78649393..53900cf8f1 100644
--- a/drivers/net/e1000/base/e1000_82575.c
+++ b/drivers/net/e1000/base/e1000_82575.c
@@ -1722,6 +1722,7 @@ STATIC s32 e1000_get_media_type_82575(struct e1000_hw *hw)
break;
}
/* Fall through for I2C based SGMII */
+ /* Fall through */
case E1000_CTRL_EXT_LINK_MODE_PCIE_SERDES:
/* read media type from SFP EEPROM */
ret_val = e1000_set_sfp_media_type_82575(hw);
diff --git a/drivers/net/e1000/base/e1000_api.c b/drivers/net/e1000/base/e1000_api.c
index 0f6e5afa3b..6697b4b64f 100644
--- a/drivers/net/e1000/base/e1000_api.c
+++ b/drivers/net/e1000/base/e1000_api.c
@@ -295,6 +295,7 @@ s32 e1000_set_mac_type(struct e1000_hw *hw)
case E1000_DEV_ID_PCH_RPL_I219_LM23:
case E1000_DEV_ID_PCH_RPL_I219_V23:
mac->type = e1000_pch_tgp;
+ break;
case E1000_DEV_ID_PCH_ADL_I219_LM17:
case E1000_DEV_ID_PCH_ADL_I219_V17:
case E1000_DEV_ID_PCH_RPL_I219_LM22:
diff --git a/drivers/net/e1000/base/meson.build b/drivers/net/e1000/base/meson.build
index 528a33f958..5a7a87f8a7 100644
--- a/drivers/net/e1000/base/meson.build
+++ b/drivers/net/e1000/base/meson.build
@@ -23,8 +23,7 @@ sources = [
]
error_cflags = ['-Wno-uninitialized', '-Wno-unused-parameter',
- '-Wno-unused-variable', '-Wno-misleading-indentation',
- '-Wno-implicit-fallthrough']
+ '-Wno-unused-variable', '-Wno-misleading-indentation']
c_args = cflags
foreach flag: error_cflags
if cc.has_argument(flag)
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.770313904 +0800
+++ 0080-net-e1000-base-fix-fallthrough-in-switch.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From 11a5adba21237d5905bcb3f5f695aa5a5cfecd9f Mon Sep 17 00:00:00 2001
+From 9a80a0abaf4df9facbefdded1635cbccdab353ac Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 11a5adba21237d5905bcb3f5f695aa5a5cfecd9f ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -19,2 +21,2 @@
- drivers/net/e1000/base/meson.build | 1 -
- 3 files changed, 2 insertions(+), 1 deletion(-)
+ drivers/net/e1000/base/meson.build | 3 +--
+ 3 files changed, 3 insertions(+), 2 deletions(-)
@@ -47 +49 @@
-index 6d6048488f..e73f3d6d55 100644
+index 528a33f958..5a7a87f8a7 100644
@@ -50,5 +52 @@
-@@ -24,7 +24,6 @@ sources = [
-
- error_cflags = [
- '-Wno-unused-parameter',
-- '-Wno-implicit-fallthrough',
+@@ -23,8 +23,7 @@ sources = [
@@ -55,0 +54,5 @@
+
+ error_cflags = ['-Wno-uninitialized', '-Wno-unused-parameter',
+- '-Wno-unused-variable', '-Wno-misleading-indentation',
+- '-Wno-implicit-fallthrough']
++ '-Wno-unused-variable', '-Wno-misleading-indentation']
@@ -57,0 +61 @@
+ if cc.has_argument(flag)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'app/procinfo: fix leak on exit' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (79 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/e1000/base: fix fallthrough in switch' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'member: fix choice of bucket for displacement' " Xueming Li
` (15 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Fidaullah Noonari
Cc: Xueming Li, Stephen Hemminger, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=8dffe1833feaac46a3a4c48c2b8469a2e2007cb7
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 8dffe1833feaac46a3a4c48c2b8469a2e2007cb7 Mon Sep 17 00:00:00 2001
From: Fidaullah Noonari <fidaullah.noonari@emumba.com>
Date: Thu, 3 Oct 2024 19:48:29 -0700
Subject: [PATCH] app/procinfo: fix leak on exit
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8a171e52ed8b26f768ced79a22286914ebd30180 ]
When the app is launched with -m, proc-info exits without calling
rte_eal_cleanup(), causing a memory leak. This commit resolves the
leak and closes the app properly.
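A minimal sketch of the early-exit cleanup pattern applied here, with
hypothetical env_init()/env_cleanup() stand-ins for the EAL calls:

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for rte_eal_init()/rte_eal_cleanup(). */
static int env_init(void)    { return 0; }
static int env_cleanup(void) { return 0; }

int main(int argc, char **argv)
{
	int ret = 0;

	if (env_init() < 0)
		return EXIT_FAILURE;

	if (argc > 1) {
		printf("early mode: %s\n", argv[1]);
		/* Do NOT 'return 0' here; jump to the common cleanup path. */
		goto cleanup;
	}

	printf("normal processing\n");

cleanup:
	ret = env_cleanup();
	if (ret != 0)
		fprintf(stderr, "cleanup failed: %d\n", ret);
	return ret == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
}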
Bugzilla ID: 898
Fixes: 67684d1e87b6 ("app/procinfo: call EAL cleanup before exit")
Fixes: 674bb3906931 ("app/procinfo: display eventdev xstats")
Signed-off-by: Fidaullah Noonari <fidaullah.noonari@emumba.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/proc-info/main.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/app/proc-info/main.c b/app/proc-info/main.c
index b672aaefbe..4a558705cc 100644
--- a/app/proc-info/main.c
+++ b/app/proc-info/main.c
@@ -2166,11 +2166,11 @@ main(int argc, char **argv)
if (mem_info) {
meminfo_display();
- return 0;
+ goto cleanup;
}
if (eventdev_xstats() > 0)
- return 0;
+ goto cleanup;
nb_ports = rte_eth_dev_count_avail();
if (nb_ports == 0)
@@ -2251,6 +2251,7 @@ main(int argc, char **argv)
RTE_ETH_FOREACH_DEV(i)
rte_eth_dev_close(i);
+cleanup:
ret = rte_eal_cleanup();
if (ret)
printf("Error from rte_eal_cleanup(), %d\n", ret);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.802334304 +0800
+++ 0081-app-procinfo-fix-leak-on-exit.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From 8a171e52ed8b26f768ced79a22286914ebd30180 Mon Sep 17 00:00:00 2001
+From 8dffe1833feaac46a3a4c48c2b8469a2e2007cb7 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8a171e52ed8b26f768ced79a22286914ebd30180 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -23 +25 @@
-index 6886eb373a..e1272164b1 100644
+index b672aaefbe..4a558705cc 100644
@@ -26 +28 @@
-@@ -2171,11 +2171,11 @@ main(int argc, char **argv)
+@@ -2166,11 +2166,11 @@ main(int argc, char **argv)
@@ -40 +42 @@
-@@ -2256,6 +2256,7 @@ main(int argc, char **argv)
+@@ -2251,6 +2251,7 @@ main(int argc, char **argv)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'member: fix choice of bucket for displacement' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (80 preceding siblings ...)
2024-12-07 8:00 ` patch 'app/procinfo: fix leak on exit' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: fix aged flow destroy' " Xueming Li
` (14 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=c6840df8b593bf813e005fddd03c7de1a7684bc9
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From c6840df8b593bf813e005fddd03c7de1a7684bc9 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Fri, 15 Nov 2024 17:12:29 -0800
Subject: [PATCH] member: fix choice of bucket for displacement
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 33f5b0dcb11580be8091f3b589845e512008e2f0 ]
Because the logical && operator was used where the bitwise & was
intended, the member code would always use the primary bucket.
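The difference is easy to see in isolation (illustrative snippet only):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t sig = 0xabc2;	/* any non-zero value with bit 0 clear */

	/* Logical &&: any non-zero operands give 1, so this is always true
	 * for a non-zero signature and the primary bucket is always chosen. */
	printf("sig && 1U -> %d\n", sig && 1U);	/* prints 1 */

	/* Bitwise &: tests the low bit, giving the intended random pick. */
	printf("sig & 1U  -> %u\n", sig & 1U);	/* prints 0 */
	return 0;
}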
Fixes: 904ec78a239c ("member: implement HT mode")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/member/rte_member_ht.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/member/rte_member_ht.c b/lib/member/rte_member_ht.c
index a85561b472..0d0376b264 100644
--- a/lib/member/rte_member_ht.c
+++ b/lib/member/rte_member_ht.c
@@ -493,7 +493,7 @@ rte_member_add_ht(const struct rte_member_setsum *ss,
return ret;
/* Random pick prim or sec for recursive displacement */
- uint32_t select_bucket = (tmp_sig && 1U) ? prim_bucket : sec_bucket;
+ uint32_t select_bucket = (tmp_sig & 1U) ? prim_bucket : sec_bucket;
if (ss->cache) {
ret = evict_from_bucket();
buckets[select_bucket].sigs[ret] = tmp_sig;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.833508205 +0800
+++ 0082-member-fix-choice-of-bucket-for-displacement.patch 2024-12-06 23:26:44.083044826 +0800
@@ -1 +1 @@
-From 33f5b0dcb11580be8091f3b589845e512008e2f0 Mon Sep 17 00:00:00 2001
+From c6840df8b593bf813e005fddd03c7de1a7684bc9 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 33f5b0dcb11580be8091f3b589845e512008e2f0 ]
@@ -10 +12,0 @@
-Cc: stable@dpdk.org
@@ -18 +20 @@
-index 357097ff4b..738471b378 100644
+index a85561b472..0d0376b264 100644
@@ -21 +23 @@
-@@ -494,7 +494,7 @@ rte_member_add_ht(const struct rte_member_setsum *ss,
+@@ -493,7 +493,7 @@ rte_member_add_ht(const struct rte_member_setsum *ss,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'app/testpmd: fix aged flow destroy' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (81 preceding siblings ...)
2024-12-07 8:00 ` patch 'member: fix choice of bucket for displacement' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix shared Rx queue control release' " Xueming Li
` (13 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Danylo Vodopianov; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=44ba5aae390e412a09968b2aca2f80cd2c4ec06b
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 44ba5aae390e412a09968b2aca2f80cd2c4ec06b Mon Sep 17 00:00:00 2001
From: Danylo Vodopianov <dvo-plv@napatech.com>
Date: Mon, 18 Nov 2024 19:03:23 +0100
Subject: [PATCH] app/testpmd: fix aged flow destroy
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 098f949f8a70f7618f5390f9c1e9edfb9e5469c4 ]
The port_flow_destroy() function never assumed that the rule array could
be freed while it is executing, and port_flow_aged() violated that
assumption. In case of a flow async create failure, it tries to do a
cleanup, but it wrongly removes the first flow (with id 0): pf->id is not
set at that moment and is always 0, so the first flow is removed. A local
copy of flow->id must be used in the call to port_flow_destroy() to avoid
accessing and processing flow->id after the flow is removed.
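A minimal sketch of why the local copy matters, with hypothetical types
and a hypothetical destroy helper rather than the testpmd code:

#include <stdio.h>
#include <stdint.h>
#include <stdlib.h>

struct flow { uint64_t id; };

/* Hypothetical destroy helper: frees the flow that matches *id. */
static void destroy_by_id(struct flow **slot, const uint64_t *id)
{
	if (*slot && (*slot)->id == *id) {
		free(*slot);
		*slot = NULL;
	}
}

int main(void)
{
	struct flow *f = malloc(sizeof(*f));

	if (f == NULL)
		return 1;
	f->id = 42;

	/* Wrong: passing &f->id lets the callee read freed memory once
	 * the flow is released. Right: snapshot the id into a local
	 * first, then pass the copy. */
	uint64_t id = f->id;

	destroy_by_id(&f, &id);
	printf("destroyed flow %llu\n", (unsigned long long)id);
	return 0;
}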
Fixes: de956d5ecf08 ("app/testpmd: support age shared action context")
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
app/test-pmd/config.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index 40e4e83fb8..ff2e9da324 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -3913,8 +3913,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
}
type = (enum age_action_context_type *)contexts[idx];
switch (*type) {
- case ACTION_AGE_CONTEXT_TYPE_FLOW:
+ case ACTION_AGE_CONTEXT_TYPE_FLOW: {
+ uint64_t flow_id;
ctx.pf = container_of(type, struct port_flow, age_type);
+ flow_id = ctx.pf->id;
printf("%-20s\t%" PRIu64 "\t%" PRIu32 "\t%" PRIu32
"\t%c%c%c\t\n",
"Flow",
@@ -3925,9 +3927,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
ctx.pf->rule.attr->egress ? 'e' : '-',
ctx.pf->rule.attr->transfer ? 't' : '-');
if (destroy && !port_flow_destroy(port_id, 1,
- &ctx.pf->id, false))
+ &flow_id, false))
total++;
break;
+ }
case ACTION_AGE_CONTEXT_TYPE_INDIRECT_ACTION:
ctx.pia = container_of(type,
struct port_indirect_action, age_type);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.863741505 +0800
+++ 0083-app-testpmd-fix-aged-flow-destroy.patch 2024-12-06 23:26:44.093044826 +0800
@@ -1 +1 @@
-From 098f949f8a70f7618f5390f9c1e9edfb9e5469c4 Mon Sep 17 00:00:00 2001
+From 44ba5aae390e412a09968b2aca2f80cd2c4ec06b Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 098f949f8a70f7618f5390f9c1e9edfb9e5469c4 ]
@@ -16 +18,0 @@
-Cc: stable@dpdk.org
@@ -25 +27 @@
-index c831166431..28d45568ac 100644
+index 40e4e83fb8..ff2e9da324 100644
@@ -28 +30 @@
-@@ -4160,8 +4160,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
+@@ -3913,8 +3913,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
@@ -40 +42 @@
-@@ -4172,9 +4174,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
+@@ -3925,9 +3927,10 @@ port_flow_aged(portid_t port_id, uint8_t destroy)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/mlx5: fix shared Rx queue control release' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (82 preceding siblings ...)
2024-12-07 8:00 ` patch 'app/testpmd: fix aged flow destroy' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'vhost: fix deadlock in Rx async path' " Xueming Li
` (12 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Bing Zhao; +Cc: Xueming Li, Dariusz Sosnowski, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=b4da83237f4577c1b8e97f97e5f9a9b017726270
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From b4da83237f4577c1b8e97f97e5f9a9b017726270 Mon Sep 17 00:00:00 2001
From: Bing Zhao <bingz@nvidia.com>
Date: Mon, 25 Nov 2024 19:23:18 +0200
Subject: [PATCH] net/mlx5: fix shared Rx queue control release
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f8f294c66b5ff6ee89590cce56a3d733513ff9a0 ]
Correct the reference counting and condition checking for shared Rx
queue control structures. This fix ensures proper memory management
during port stop and device close stages.
The changes move the control structure reference count decrease
outside the owners list empty condition, and adjust the reference
count check to subtract first, then evaluate.
This prevents potential crashes during port restart by
ensuring shared Rx queues' control structures are properly freed.
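A minimal sketch of the subtract-first-then-evaluate refcount pattern,
using C11 atomics and hypothetical names rather than the mlx5 code:

#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

struct ctrl {
	atomic_int refcnt;
	/* ... shared state ... */
};

/* Drop one reference; free the object only when the count hits zero. */
static void ctrl_put(struct ctrl **c)
{
	/* fetch_sub returns the value *before* the decrement, so the
	 * remaining count is the returned value minus one. */
	int remaining = atomic_fetch_sub_explicit(&(*c)->refcnt, 1,
						  memory_order_acq_rel) - 1;

	if (remaining == 0) {
		free(*c);
		*c = NULL;
	}
}

int main(void)
{
	struct ctrl *c = malloc(sizeof(*c));

	if (c == NULL)
		return 1;
	atomic_init(&c->refcnt, 2);
	ctrl_put(&c);		/* one owner left, not freed */
	ctrl_put(&c);		/* last owner, freed here */
	printf("freed: %s\n", c == NULL ? "yes" : "no");
	return 0;
}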
Fixes: 3c9a82fa6edc ("net/mlx5: fix Rx queue control management")
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
drivers/net/mlx5/mlx5_rx.h | 2 +-
drivers/net/mlx5/mlx5_rxq.c | 12 ++++++------
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index f78fae26d3..db912adf2a 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -157,7 +157,7 @@ struct mlx5_rxq_ctrl {
bool is_hairpin; /* Whether RxQ type is Hairpin. */
unsigned int socket; /* CPU socket ID for allocations. */
LIST_ENTRY(mlx5_rxq_ctrl) share_entry; /* Entry in shared RXQ list. */
- RTE_ATOMIC(uint32_t) ctrl_ref; /* Reference counter. */
+ RTE_ATOMIC(int32_t) ctrl_ref; /* Reference counter. */
uint32_t share_group; /* Group ID of shared RXQ. */
uint16_t share_qid; /* Shared RxQ ID in group. */
unsigned int started:1; /* Whether (shared) RXQ has been started. */
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 6d28bcb57c..dccfc4eb36 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2269,6 +2269,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
struct mlx5_rxq_priv *rxq;
struct mlx5_rxq_ctrl *rxq_ctrl;
uint32_t refcnt;
+ int32_t ctrl_ref;
if (priv->rxq_privs == NULL)
return 0;
@@ -2294,15 +2295,14 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
}
} else { /* Refcnt zero, closing device. */
LIST_REMOVE(rxq, owner_entry);
- if (LIST_EMPTY(&rxq_ctrl->owners)) {
+ ctrl_ref = rte_atomic_fetch_sub_explicit(&rxq_ctrl->ctrl_ref, 1,
+ rte_memory_order_relaxed) - 1;
+ if (ctrl_ref == 1 && LIST_EMPTY(&rxq_ctrl->owners)) {
if (!rxq_ctrl->is_hairpin)
mlx5_mr_btree_free
(&rxq_ctrl->rxq.mr_ctrl.cache_bh);
- if (rte_atomic_fetch_sub_explicit(&rxq_ctrl->ctrl_ref, 1,
- rte_memory_order_relaxed) == 1) {
- LIST_REMOVE(rxq_ctrl, share_entry);
- mlx5_free(rxq_ctrl);
- }
+ LIST_REMOVE(rxq_ctrl, share_entry);
+ mlx5_free(rxq_ctrl);
}
dev->data->rx_queues[idx] = NULL;
mlx5_free(rxq);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.895414505 +0800
+++ 0084-net-mlx5-fix-shared-Rx-queue-control-release.patch 2024-12-06 23:26:44.093044826 +0800
@@ -1 +1 @@
-From f8f294c66b5ff6ee89590cce56a3d733513ff9a0 Mon Sep 17 00:00:00 2001
+From b4da83237f4577c1b8e97f97e5f9a9b017726270 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f8f294c66b5ff6ee89590cce56a3d733513ff9a0 ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index da7c448948..1a6f174c40 100644
+index f78fae26d3..db912adf2a 100644
@@ -41 +43 @@
-index 0737f60272..126b1970e6 100644
+index 6d28bcb57c..dccfc4eb36 100644
@@ -44 +46 @@
-@@ -2268,6 +2268,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2269,6 +2269,7 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
@@ -52 +54 @@
-@@ -2293,15 +2294,14 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
+@@ -2294,15 +2295,14 @@ mlx5_rxq_release(struct rte_eth_dev *dev, uint16_t idx)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'vhost: fix deadlock in Rx async path' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (83 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/mlx5: fix shared Rx queue control release' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'net/txgbe: fix a mass of interrupts' " Xueming Li
` (11 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Maxime Coquelin, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=37f8c86e53d081589faa324c6d6b034e1752014e
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 37f8c86e53d081589faa324c6d6b034e1752014e Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Mon, 18 Nov 2024 08:24:24 -0800
Subject: [PATCH] vhost: fix deadlock in Rx async path
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 22aa9a9c7099e1f4b297899c33b4fea1131d3ac7 ]
If a lock is acquired for write, it must be released for write,
or a deadlock is likely.
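A toy illustration of the rule (not the vhost code): with an API that has
separate read and write unlock calls, a mismatched unlock leaves the lock
effectively held:

#include <stdio.h>

/* Toy reader/writer lock state for illustration only: a negative count
 * means a writer holds the lock, positive counts are readers. */
struct toy_rwlock { int cnt; };

static void write_lock(struct toy_rwlock *l)   { l->cnt = -1; }
static void write_unlock(struct toy_rwlock *l) { l->cnt = 0; }
static void read_unlock(struct toy_rwlock *l)  { l->cnt--; } /* wrong on a write-held lock */

int main(void)
{
	struct toy_rwlock l = { 0 };

	write_lock(&l);
	/* Bug pattern: releasing for read after acquiring for write. */
	read_unlock(&l);
	printf("mismatched unlock: cnt = %d (lock still looks held)\n", l.cnt);

	/* Correct pairing. */
	l.cnt = 0;
	write_lock(&l);
	write_unlock(&l);
	printf("matched unlock:    cnt = %d (free)\n", l.cnt);
	return 0;
}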
Bugzilla ID: 1582
Fixes: 9fc93a1e2320 ("vhost: fix virtqueue access check in datapath")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
lib/vhost/virtio_net.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 6d53ff932d..5ec89719c6 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2546,7 +2546,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
if (unlikely(!vq->access_ok)) {
vhost_user_iotlb_rd_unlock(vq);
- rte_rwlock_read_unlock(&vq->access_lock);
+ rte_rwlock_write_unlock(&vq->access_lock);
virtio_dev_vring_translate(dev, vq);
goto out_no_unlock;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.932850606 +0800
+++ 0085-vhost-fix-deadlock-in-Rx-async-path.patch 2024-12-06 23:26:44.093044826 +0800
@@ -1 +1 @@
-From 22aa9a9c7099e1f4b297899c33b4fea1131d3ac7 Mon Sep 17 00:00:00 2001
+From 37f8c86e53d081589faa324c6d6b034e1752014e Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 22aa9a9c7099e1f4b297899c33b4fea1131d3ac7 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -20 +22 @@
-index 298a5dae74..d764d4bc6a 100644
+index 6d53ff932d..5ec89719c6 100644
@@ -23 +25 @@
-@@ -2538,7 +2538,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
+@@ -2546,7 +2546,7 @@ virtio_dev_rx_async_submit(struct virtio_net *dev, struct vhost_virtqueue *vq,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'net/txgbe: fix a mass of interrupts' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (84 preceding siblings ...)
2024-12-07 8:00 ` patch 'vhost: fix deadlock in Rx async path' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'pcapng: avoid potential unaligned data' " Xueming Li
` (10 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Jiawen Wu; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=6262d28b1f89515a423f33d06c1d908a65bf6bc3
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 6262d28b1f89515a423f33d06c1d908a65bf6bc3 Mon Sep 17 00:00:00 2001
From: Jiawen Wu <jiawenwu@trustnetic.com>
Date: Fri, 15 Nov 2024 16:33:36 +0800
Subject: [PATCH] net/txgbe: fix a mass of interrupts
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 916aa13f4a198aebf5383f9680cb5cd527518f2c ]
Since firmware version 0x20010, GPIO interrupt enable is set to 0xd by
default, which enables bit 0 'tx_fault'. GPIO interrupt polarity is also
set to 0xd by default, which makes these interrupts rising-edge
sensitive.
So when the SFP module is unplugged, GPIO line 0 'tx_fault' goes from 0
to 1 and triggers the interrupt. However, the interrupt is not cleared,
and the GPIO interrupt mask is enabled and disabled so that the MISC
interrupt is triggered repeatedly.
Since this 'tx_fault' interrupt does not make much sense, simply clear it
to fix the issue.
Fixes: 12011b11a3d6 ("net/txgbe: adapt to MNG veto bit setting")
Signed-off-by: Jiawen Wu <jiawenwu@trustnetic.com>
---
drivers/net/txgbe/txgbe_ethdev.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/txgbe/txgbe_ethdev.c b/drivers/net/txgbe/txgbe_ethdev.c
index 9b44f11465..25b657d0ff 100644
--- a/drivers/net/txgbe/txgbe_ethdev.c
+++ b/drivers/net/txgbe/txgbe_ethdev.c
@@ -1553,6 +1553,9 @@ static void txgbe_reinit_gpio_intr(struct txgbe_hw *hw)
wr32(hw, TXGBE_GPIOINTMASK, 0xFF);
reg = rd32(hw, TXGBE_GPIORAWINTSTAT);
+ if (reg & TXGBE_GPIOBIT_0)
+ wr32(hw, TXGBE_GPIOEOI, TXGBE_GPIOBIT_0);
+
if (reg & TXGBE_GPIOBIT_2)
wr32(hw, TXGBE_GPIOEOI, TXGBE_GPIOBIT_2);
@@ -2776,6 +2779,8 @@ txgbe_dev_sfp_event(struct rte_eth_dev *dev)
wr32(hw, TXGBE_GPIOINTMASK, 0xFF);
reg = rd32(hw, TXGBE_GPIORAWINTSTAT);
+ if (reg & TXGBE_GPIOBIT_0)
+ wr32(hw, TXGBE_GPIOEOI, TXGBE_GPIOBIT_0);
if (reg & TXGBE_GPIOBIT_2) {
wr32(hw, TXGBE_GPIOEOI, TXGBE_GPIOBIT_2);
rte_eal_alarm_set(1000 * 100, txgbe_dev_detect_sfp, dev);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.964933206 +0800
+++ 0086-net-txgbe-fix-a-mass-of-interrupts.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From 916aa13f4a198aebf5383f9680cb5cd527518f2c Mon Sep 17 00:00:00 2001
+From 6262d28b1f89515a423f33d06c1d908a65bf6bc3 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 916aa13f4a198aebf5383f9680cb5cd527518f2c ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -27 +29 @@
-index a956216abb..ea9faba2c0 100644
+index 9b44f11465..25b657d0ff 100644
@@ -30 +32 @@
-@@ -1555,6 +1555,9 @@ static void txgbe_reinit_gpio_intr(struct txgbe_hw *hw)
+@@ -1553,6 +1553,9 @@ static void txgbe_reinit_gpio_intr(struct txgbe_hw *hw)
@@ -40 +42 @@
-@@ -2796,6 +2799,8 @@ txgbe_dev_sfp_event(struct rte_eth_dev *dev)
+@@ -2776,6 +2779,8 @@ txgbe_dev_sfp_event(struct rte_eth_dev *dev)
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'pcapng: avoid potential unaligned data' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (85 preceding siblings ...)
2024-12-07 8:00 ` patch 'net/txgbe: fix a mass of interrupts' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/bonding: fix loop on members' " Xueming Li
` (9 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=939b471310e29c5b7f9640f53e9ba22ce32e6441
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 939b471310e29c5b7f9640f53e9ba22ce32e6441 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Wed, 20 Nov 2024 15:06:35 -0800
Subject: [PATCH] pcapng: avoid potential unaligned data
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0cbf27521b0d6e7cb79f41a5e699d82562b09c03 ]
The buffer used to construct headers (which contain 32-bit values)
was declared as uint8_t, which can lead to unaligned access.
Change the declaration of the buffer to uint32_t.
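A minimal sketch of the alignment concern with hypothetical names (the
0x0A0D0D0A value is the pcapng section header block type):

#include <stdio.h>
#include <stdint.h>

struct block_header {
	uint32_t type;
	uint32_t length;
};

int main(void)
{
	/* A uint8_t array only guarantees 1-byte alignment, so treating
	 * it as 32-bit words is, strictly speaking, unaligned access. */
	uint8_t bytes[64];

	/* A uint32_t array is aligned for 32-bit stores by construction;
	 * note the size is expressed in 32-bit words, not bytes. */
	uint32_t words[64 / sizeof(uint32_t)];

	struct block_header *hdr = (struct block_header *)words;

	hdr->type = 0x0A0D0D0AU;	/* pcapng section header block type */
	hdr->length = sizeof(*hdr);

	printf("bytes%%4=%lu words%%4=%lu type=0x%x\n",
	       (unsigned long)((uintptr_t)bytes % 4),
	       (unsigned long)((uintptr_t)words % 4),
	       (unsigned)hdr->type);
	return 0;
}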
Fixes: dc2d6d20047e ("pcapng: avoid using alloca")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
lib/pcapng/rte_pcapng.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/lib/pcapng/rte_pcapng.c b/lib/pcapng/rte_pcapng.c
index e5326c1d38..16485b27cb 100644
--- a/lib/pcapng/rte_pcapng.c
+++ b/lib/pcapng/rte_pcapng.c
@@ -33,8 +33,8 @@
/* conversion from DPDK speed to PCAPNG */
#define PCAPNG_MBPS_SPEED 1000000ull
-/* upper bound for section, stats and interface blocks */
-#define PCAPNG_BLKSIZ 2048
+/* upper bound for section, stats and interface blocks (in uint32_t) */
+#define PCAPNG_BLKSIZ (2048 / sizeof(uint32_t))
/* Format of the capture file handle */
struct rte_pcapng {
@@ -144,7 +144,7 @@ pcapng_section_block(rte_pcapng_t *self,
{
struct pcapng_section_header *hdr;
struct pcapng_option *opt;
- uint8_t buf[PCAPNG_BLKSIZ];
+ uint32_t buf[PCAPNG_BLKSIZ];
uint32_t len;
len = sizeof(*hdr);
@@ -212,7 +212,7 @@ rte_pcapng_add_interface(rte_pcapng_t *self, uint16_t port,
struct pcapng_option *opt;
const uint8_t tsresol = 9; /* nanosecond resolution */
uint32_t len;
- uint8_t buf[PCAPNG_BLKSIZ];
+ uint32_t buf[PCAPNG_BLKSIZ];
char ifname_buf[IF_NAMESIZE];
char ifhw[256];
uint64_t speed = 0;
@@ -330,7 +330,7 @@ rte_pcapng_write_stats(rte_pcapng_t *self, uint16_t port_id,
uint64_t start_time = self->offset_ns;
uint64_t sample_time;
uint32_t optlen, len;
- uint8_t buf[PCAPNG_BLKSIZ];
+ uint32_t buf[PCAPNG_BLKSIZ];
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.992965606 +0800
+++ 0087-pcapng-avoid-potential-unaligned-data.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From 0cbf27521b0d6e7cb79f41a5e699d82562b09c03 Mon Sep 17 00:00:00 2001
+From 939b471310e29c5b7f9640f53e9ba22ce32e6441 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0cbf27521b0d6e7cb79f41a5e699d82562b09c03 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/bonding: fix loop on members' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (86 preceding siblings ...)
2024-12-07 8:00 ` patch 'pcapng: avoid potential unaligned data' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/bonding: fix MAC address comparison' " Xueming Li
` (8 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Bruce Richardson, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=7f673c3b33515090cab7d1d15d0f6459a3944b40
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 7f673c3b33515090cab7d1d15d0f6459a3944b40 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:22 -0800
Subject: [PATCH] test/bonding: fix loop on members
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 112ce3917674b7e316776305d7e27778d17eb1b7 ]
Do not use the same variable for the outer and inner loops in the
bonding test. Since the loop just frees the resulting burst, use bulk
free instead.
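A minimal, hypothetical sketch of the loop-variable pitfall and the fix
(a separate inner index, or a bulk-free helper such as
rte_pktmbuf_free_bulk):

#include <stdio.h>

#define MEMBERS 3
#define BURST   4

int main(void)
{
	int freed[MEMBERS] = { 0 };
	int i;

	for (i = 0; i < MEMBERS; i++) {
		/* Bug pattern: reusing 'i' for the inner loop clobbers the
		 * outer counter; use a separate variable instead. */
		int j;

		for (j = 0; j < BURST; j++)
			freed[i]++;	/* per-member cleanup work */
	}

	for (i = 0; i < MEMBERS; i++)
		printf("member %d cleaned %d buffers\n", i, freed[i]);
	return 0;
}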
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: 92073ef961ee ("bond: unit tests")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/test/test_link_bonding.c | 7 +------
1 file changed, 1 insertion(+), 6 deletions(-)
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 4d54706c21..805613d7dd 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -2288,12 +2288,7 @@ test_activebackup_rx_burst(void)
}
/* free mbufs */
- for (i = 0; i < MAX_PKT_BURST; i++) {
- if (rx_pkt_burst[i] != NULL) {
- rte_pktmbuf_free(rx_pkt_burst[i]);
- rx_pkt_burst[i] = NULL;
- }
- }
+ rte_pktmbuf_free_bulk(rx_pkt_burst, burst_size);
/* reset bonding device stats */
rte_eth_stats_reset(test_params->bonding_port_id);
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.017864407 +0800
+++ 0088-test-bonding-fix-loop-on-members.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From 112ce3917674b7e316776305d7e27778d17eb1b7 Mon Sep 17 00:00:00 2001
+From 7f673c3b33515090cab7d1d15d0f6459a3944b40 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 112ce3917674b7e316776305d7e27778d17eb1b7 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/bonding: fix MAC address comparison' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (87 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/bonding: fix loop on members' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/security: fix IPv6 extension loop' " Xueming Li
` (7 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Chengwen Feng, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=2f96cd1d38b768bfeb06277ceb520709356aaece
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 2f96cd1d38b768bfeb06277ceb520709356aaece Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:23 -0800
Subject: [PATCH] test/bonding: fix MAC address comparison
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f7f85632daf6d6f525d443f90a0ac3c8a3e40b72 ]
The first argument of the 'memcmp' call was the same as the second
argument, so the ASSERT would always be true.
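The pattern in isolation (illustrative buffers only):

#include <stdio.h>
#include <string.h>

int main(void)
{
	unsigned char expected[6]  = { 0x02, 0x00, 0x00, 0x00, 0x00, 0x01 };
	unsigned char read_back[6] = { 0 };	/* pretend this came from the port */

	/* Always 0: both arguments are the same object, so the check
	 * can never fail. */
	printf("self compare: %d\n",
	       memcmp(read_back, read_back, sizeof(read_back)));

	/* Intended check: compare what was read against what is expected. */
	printf("real compare matches: %d\n",
	       memcmp(expected, read_back, sizeof(read_back)) == 0);
	return 0;
}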
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: 92073ef961ee ("bond: unit tests")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_link_bonding.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test/test_link_bonding.c b/app/test/test_link_bonding.c
index 805613d7dd..b752a5ecbf 100644
--- a/app/test/test_link_bonding.c
+++ b/app/test/test_link_bonding.c
@@ -792,7 +792,7 @@ test_set_primary_member(void)
&read_mac_addr),
"Failed to get mac address (port %d)",
test_params->bonding_port_id);
- TEST_ASSERT_SUCCESS(memcmp(&read_mac_addr, &read_mac_addr,
+ TEST_ASSERT_SUCCESS(memcmp(expected_mac_addr, &read_mac_addr,
sizeof(read_mac_addr)),
"bonding port mac address not set to that of primary port\n");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.044001707 +0800
+++ 0089-test-bonding-fix-MAC-address-comparison.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From f7f85632daf6d6f525d443f90a0ac3c8a3e40b72 Mon Sep 17 00:00:00 2001
+From 2f96cd1d38b768bfeb06277ceb520709356aaece Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f7f85632daf6d6f525d443f90a0ac3c8a3e40b72 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/security: fix IPv6 extension loop' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (88 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/bonding: fix MAC address comparison' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/event: avoid duplicate initialization' " Xueming Li
` (6 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Bruce Richardson, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4227ec424b9bfa85301d65bfc0928ea9c903e597
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4227ec424b9bfa85301d65bfc0928ea9c903e597 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:24 -0800
Subject: [PATCH] test/security: fix IPv6 extension loop
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 0151b80786ebbc62f0ead73bd4708665228a093d ]
The parentheses were in the wrong place, so the comparison
took precedence over the assignment when handling IPv6 extension
headers. Break up the loop condition to avoid the problem.
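The precedence trap in isolation, with a hypothetical next_ext() helper
standing in for rte_ipv6_get_next_ext():

#include <stdio.h>

/* Hypothetical stand-in for a next-header lookup: returns the next
 * protocol number, or -1 when there is nothing left to parse. */
static int next_ext(int proto)
{
	return proto > 0 ? proto - 1 : -1;
}

int main(void)
{
	int proto;

	/* Buggy form: '>=' binds tighter than '=', so this is parsed as
	 * proto = (next_ext(3) >= 0) and proto gets the boolean result,
	 * not the protocol value. */
	proto = next_ext(3) >= 0;
	printf("buggy result: %d\n", proto);	/* 1, the value is lost */

	/* Clear form: assign first, then test, as separate statements. */
	proto = 3;
	while (proto != 0) {
		proto = next_ext(proto);
		if (proto < 0)
			break;
	}
	printf("parsed down to proto %d\n", proto);
	return 0;
}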
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: 15ccc647526e ("test/security: test inline reassembly with multi-segment")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/test/test_security_inline_proto_vectors.h | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/app/test/test_security_inline_proto_vectors.h b/app/test/test_security_inline_proto_vectors.h
index 3ac75588a3..0b4093d19a 100644
--- a/app/test/test_security_inline_proto_vectors.h
+++ b/app/test/test_security_inline_proto_vectors.h
@@ -498,10 +498,12 @@ test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
if (extra_data_sum) {
proto = hdr->proto;
p += sizeof(struct rte_ipv6_hdr);
- while (proto != IPPROTO_FRAGMENT &&
- (proto = rte_ipv6_get_next_ext(p, proto, &ext_len) >= 0))
+ while (proto != IPPROTO_FRAGMENT) {
+ proto = rte_ipv6_get_next_ext(p, proto, &ext_len);
+ if (proto < 0)
+ break;
p += ext_len;
-
+ }
/* Found fragment header, update the frag offset */
if (proto == IPPROTO_FRAGMENT) {
frag_ext = (struct rte_ipv6_fragment_ext *)p;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.070377807 +0800
+++ 0090-test-security-fix-IPv6-extension-loop.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From 0151b80786ebbc62f0ead73bd4708665228a093d Mon Sep 17 00:00:00 2001
+From 4227ec424b9bfa85301d65bfc0928ea9c903e597 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 0151b80786ebbc62f0ead73bd4708665228a093d ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index b3d724bac6..86dfa54777 100644
+index 3ac75588a3..0b4093d19a 100644
@@ -25 +27 @@
-@@ -519,10 +519,12 @@ test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
+@@ -498,10 +498,12 @@ test_vector_payload_populate(struct ip_reassembly_test_packet *pkt,
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'test/event: avoid duplicate initialization' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (89 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/security: fix IPv6 extension loop' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/eal: fix loop coverage for alignment macros' " Xueming Li
` (5 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Bruce Richardson, Abhinandan Gujjar, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=30e22a764cf9edbdcb4fd394d08d720b5ad79052
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 30e22a764cf9edbdcb4fd394d08d720b5ad79052 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:25 -0800
Subject: [PATCH] test/event: avoid duplicate initialization
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 8c08b10d047ac64fb98709871b192698663af7d7 ]
The event_dev_config initialization had duplicate assignments
to the same element. Change to structure initialization
so that the compiler will catch this type of bug.
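A minimal sketch of the designated-initializer style with an illustrative
struct (not the real rte_event_dev_config); a duplicated member
initializer here is reported by GCC with -Woverride-init:

#include <stdio.h>
#include <stdint.h>

struct dev_config {
	uint32_t timeout_ns;
	uint16_t nb_ports;
	uint16_t nb_queues;
	uint32_t nb_events_limit;
};

int main(void)
{
	/* Every member is named exactly once; unnamed members are zeroed.
	 * Writing ".nb_ports" twice would trigger the compiler warning. */
	struct dev_config conf = {
		.timeout_ns = 1000,
		.nb_ports = 2,
		.nb_queues = 2,
		.nb_events_limit = 4096,
	};

	printf("ports=%u queues=%u\n", conf.nb_ports, conf.nb_queues);
	return 0;
}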
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: f8f9d233ea0e ("test/eventdev: add unit tests")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Abhinandan Gujjar <abhinandan.gujjar@intel.com>
---
app/test/test_event_crypto_adapter.c | 24 ++++++++++--------------
1 file changed, 10 insertions(+), 14 deletions(-)
diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c
index 0c56744ba0..e733df738e 100644
--- a/app/test/test_event_crypto_adapter.c
+++ b/app/test/test_event_crypto_adapter.c
@@ -1154,21 +1154,17 @@ configure_cryptodev(void)
static inline void
evdev_set_conf_values(struct rte_event_dev_config *dev_conf,
- struct rte_event_dev_info *info)
+ const struct rte_event_dev_info *info)
{
- memset(dev_conf, 0, sizeof(struct rte_event_dev_config));
- dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns;
- dev_conf->nb_event_ports = NB_TEST_PORTS;
- dev_conf->nb_event_queues = NB_TEST_QUEUES;
- dev_conf->nb_event_queue_flows = info->max_event_queue_flows;
- dev_conf->nb_event_port_dequeue_depth =
- info->max_event_port_dequeue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_event_port_enqueue_depth =
- info->max_event_port_enqueue_depth;
- dev_conf->nb_events_limit =
- info->max_num_events;
+ *dev_conf = (struct rte_event_dev_config) {
+ .dequeue_timeout_ns = info->min_dequeue_timeout_ns,
+ .nb_event_ports = NB_TEST_PORTS,
+ .nb_event_queues = NB_TEST_QUEUES,
+ .nb_event_queue_flows = info->max_event_queue_flows,
+ .nb_event_port_dequeue_depth = info->max_event_port_dequeue_depth,
+ .nb_event_port_enqueue_depth = info->max_event_port_enqueue_depth,
+ .nb_events_limit = info->max_num_events,
+ };
}
static int
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.096016608 +0800
+++ 0091-test-event-avoid-duplicate-initialization.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From 8c08b10d047ac64fb98709871b192698663af7d7 Mon Sep 17 00:00:00 2001
+From 30e22a764cf9edbdcb4fd394d08d720b5ad79052 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 8c08b10d047ac64fb98709871b192698663af7d7 ]
@@ -12 +14,0 @@
-Cc: stable@dpdk.org
@@ -22 +24 @@
-index 9d38a66bfa..ab24e30a97 100644
+index 0c56744ba0..e733df738e 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
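For illustration, a minimal standalone sketch of why the switch to designated initializers matters (this is not part of the patch; struct conf and its fields are invented). A duplicated plain assignment compiles silently, while a duplicated designated initializer is flagged by GCC with -Wall -Wextra (-Woverride-init) and by Clang with -Winitializer-overrides:

/* dup_init.c: build with "gcc -Wall -Wextra dup_init.c" to see the
 * -Woverride-init warning on the designated-initializer duplicate. */
#include <stdio.h>

struct conf {
    int ports;
    int queues;
};

int main(void)
{
    struct conf a;

    a.ports = 1;
    a.ports = 1;    /* duplicate assignment: no diagnostic at all */
    a.queues = 2;

    struct conf b = {
        .ports = 1,
        .ports = 1,    /* duplicate initializer: compiler warns here */
        .queues = 2,
    };

    printf("%d %d %d %d\n", a.ports, a.queues, b.ports, b.queues);
    return 0;
}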
* patch 'test/eal: fix loop coverage for alignment macros' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (90 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/event: avoid duplicate initialization' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'test/eal: fix lcore check' " Xueming Li
` (4 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Bruce Richardson, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d3a710498373cfe91bbd48ad335fe4bc9c076935
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From d3a710498373cfe91bbd48ad335fe4bc9c076935 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:27 -0800
Subject: [PATCH] test/eal: fix loop coverage for alignment macros
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b3e64fe596a3117edf6d3a79a6c5238a9b92dc4f ]
The test loop was much shorter than desired because, when MAX_NUM
is defined without parentheses, the divide operator /
takes precedence over the shift.
But when MAX_NUM is fixed, some tests take too long
and have to be modified to avoid running over the full N^2
space of 1<<20.
Note: this is a very old bug; it goes back to 2013.
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: 1fb8b07ee511 ("app: add some tests")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
---
app/test/test_common.c | 31 +++++++++++++++++--------------
1 file changed, 17 insertions(+), 14 deletions(-)
diff --git a/app/test/test_common.c b/app/test/test_common.c
index 21eb2285e1..6dbd7fc9a9 100644
--- a/app/test/test_common.c
+++ b/app/test/test_common.c
@@ -9,11 +9,12 @@
#include <rte_common.h>
#include <rte_bitops.h>
#include <rte_hexdump.h>
+#include <rte_random.h>
#include <rte_pause.h>
#include "test.h"
-#define MAX_NUM 1 << 20
+#define MAX_NUM (1 << 20)
#define FAIL(x)\
{printf(x "() test failed!\n");\
@@ -218,19 +219,21 @@ test_align(void)
}
}
- for (p = 1; p <= MAX_NUM / 2; p++) {
- for (i = 1; i <= MAX_NUM / 2; i++) {
- val = RTE_ALIGN_MUL_CEIL(i, p);
- if (val % p != 0 || val < i)
- FAIL_ALIGN("RTE_ALIGN_MUL_CEIL", i, p);
- val = RTE_ALIGN_MUL_FLOOR(i, p);
- if (val % p != 0 || val > i)
- FAIL_ALIGN("RTE_ALIGN_MUL_FLOOR", i, p);
- val = RTE_ALIGN_MUL_NEAR(i, p);
- if (val % p != 0 || ((val != RTE_ALIGN_MUL_CEIL(i, p))
- & (val != RTE_ALIGN_MUL_FLOOR(i, p))))
- FAIL_ALIGN("RTE_ALIGN_MUL_NEAR", i, p);
- }
+ /* testing the whole space of 2^20^2 takes too long. */
+ for (j = 1; j <= MAX_NUM ; j++) {
+ i = rte_rand_max(MAX_NUM - 1) + 1;
+ p = rte_rand_max(MAX_NUM - 1) + 1;
+
+ val = RTE_ALIGN_MUL_CEIL(i, p);
+ if (val % p != 0 || val < i)
+ FAIL_ALIGN("RTE_ALIGN_MUL_CEIL", i, p);
+ val = RTE_ALIGN_MUL_FLOOR(i, p);
+ if (val % p != 0 || val > i)
+ FAIL_ALIGN("RTE_ALIGN_MUL_FLOOR", i, p);
+ val = RTE_ALIGN_MUL_NEAR(i, p);
+ if (val % p != 0 || ((val != RTE_ALIGN_MUL_CEIL(i, p))
+ & (val != RTE_ALIGN_MUL_FLOOR(i, p))))
+ FAIL_ALIGN("RTE_ALIGN_MUL_NEAR", i, p);
}
return 0;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.120887608 +0800
+++ 0092-test-eal-fix-loop-coverage-for-alignment-macros.patch 2024-12-06 23:26:44.103044826 +0800
@@ -1 +1 @@
-From b3e64fe596a3117edf6d3a79a6c5238a9b92dc4f Mon Sep 17 00:00:00 2001
+From d3a710498373cfe91bbd48ad335fe4bc9c076935 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b3e64fe596a3117edf6d3a79a6c5238a9b92dc4f ]
@@ -18 +20,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
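To make the precedence point concrete, here is a small standalone sketch (the macro names are invented; this is not the test code itself). With the unparenthesised definition, MAX_NUM / 2 expands to 1 << (20 / 2), i.e. 1024, instead of the intended (1 << 20) / 2 = 524288, which is why the loop bound collapsed:

/* shift_precedence.c: division binds tighter than <<, so an
 * unparenthesised "1 << 20" macro breaks arithmetic done on it. */
#include <stdio.h>

#define MAX_NUM_BUGGY 1 << 20      /* as in the pre-fix code */
#define MAX_NUM_FIXED (1 << 20)

int main(void)
{
    printf("buggy: %d\n", MAX_NUM_BUGGY / 2);  /* 1 << (20 / 2) = 1024 */
    printf("fixed: %d\n", MAX_NUM_FIXED / 2);  /* 524288 */
    return 0;
}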
* patch 'test/eal: fix lcore check' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (91 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/eal: fix loop coverage for alignment macros' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: remove redundant policy action condition' " Xueming Li
` (3 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Bruce Richardson, Aaron Conole, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=e5eb4aee76752662d9af909ddc0a991ae3ae5235
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From e5eb4aee76752662d9af909ddc0a991ae3ae5235 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:28 -0800
Subject: [PATCH] test/eal: fix lcore check
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 357f915ef5e1280d921fb103ea33066e7a888ed2 ]
The expression for checking which lcore is enabled for 0-7
was wrong (missing case for 6).
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: b0209034f2bb ("test/eal: check number of cores before running subtests")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Aaron Conole <aconole@redhat.com>
---
app/test/test_eal_flags.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/app/test/test_eal_flags.c b/app/test/test_eal_flags.c
index 6cb4b06757..767cf1481c 100644
--- a/app/test/test_eal_flags.c
+++ b/app/test/test_eal_flags.c
@@ -677,8 +677,8 @@ test_missing_c_flag(void)
if (rte_lcore_is_enabled(0) && rte_lcore_is_enabled(1) &&
rte_lcore_is_enabled(2) && rte_lcore_is_enabled(3) &&
- rte_lcore_is_enabled(3) && rte_lcore_is_enabled(5) &&
- rte_lcore_is_enabled(4) && rte_lcore_is_enabled(7) &&
+ rte_lcore_is_enabled(4) && rte_lcore_is_enabled(5) &&
+ rte_lcore_is_enabled(6) && rte_lcore_is_enabled(7) &&
launch_proc(argv29) != 0) {
printf("Error - "
"process did not run ok with valid corelist value\n");
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.147901508 +0800
+++ 0093-test-eal-fix-lcore-check.patch 2024-12-06 23:26:44.113044826 +0800
@@ -1 +1 @@
-From 357f915ef5e1280d921fb103ea33066e7a888ed2 Mon Sep 17 00:00:00 2001
+From e5eb4aee76752662d9af909ddc0a991ae3ae5235 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 357f915ef5e1280d921fb103ea33066e7a888ed2 ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index d37d6b8627..e32f83d3c8 100644
+index 6cb4b06757..767cf1481c 100644
^ permalink raw reply [flat|nested] 230+ messages in thread
* patch 'app/testpmd: remove redundant policy action condition' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (92 preceding siblings ...)
2024-12-07 8:00 ` patch 'test/eal: fix lcore check' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: avoid potential outside of array reference' " Xueming Li
` (2 subsequent siblings)
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Bruce Richardson, Ajit Khaparde, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=1fb0641a6dbe0f1210756dd4ba5d538c27d24132
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 1fb0641a6dbe0f1210756dd4ba5d538c27d24132 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:29 -0800
Subject: [PATCH] app/testpmd: remove redundant policy action condition
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 4c2e7468426ae6be3f2a8f2d15e7d1222083eb9d ]
The loop over policy actions will always exit when it sees
the flow end action, so the next check is redundant.
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: f29fa2c59b85 ("app/testpmd: support policy actions per color")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/test-pmd/config.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ff2e9da324..9c3d668e56 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -2219,7 +2219,7 @@ port_meter_policy_add(portid_t port_id, uint32_t policy_id,
for (act_n = 0, start = act;
act->type != RTE_FLOW_ACTION_TYPE_END; act++)
act_n++;
- if (act_n && act->type == RTE_FLOW_ACTION_TYPE_END)
+ if (act_n > 0)
policy.actions[i] = start;
else
policy.actions[i] = NULL;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.175511108 +0800
+++ 0094-app-testpmd-remove-redundant-policy-action-condition.patch 2024-12-06 23:26:44.113044826 +0800
@@ -1 +1 @@
-From 4c2e7468426ae6be3f2a8f2d15e7d1222083eb9d Mon Sep 17 00:00:00 2001
+From 1fb0641a6dbe0f1210756dd4ba5d538c27d24132 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 4c2e7468426ae6be3f2a8f2d15e7d1222083eb9d ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -18,2 +20,2 @@
- app/test-pmd/config.c | 3 +--
- 1 file changed, 1 insertion(+), 2 deletions(-)
+ app/test-pmd/config.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -22 +24 @@
-index 28d45568ac..4e7fb69183 100644
+index ff2e9da324..9c3d668e56 100644
@@ -25 +27 @@
-@@ -2288,7 +2288,7 @@ port_meter_policy_add(portid_t port_id, uint32_t policy_id,
+@@ -2219,7 +2219,7 @@ port_meter_policy_add(portid_t port_id, uint32_t policy_id,
@@ -34,5 +35,0 @@
-@@ -7338,4 +7338,3 @@ show_mcast_macs(portid_t port_id)
- printf(" %s\n", buf);
- }
- }
--
^ permalink raw reply [flat|nested] 230+ messages in thread
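The redundancy follows from the loop's post-condition; a minimal sketch with invented action names (not the testpmd code itself) makes it explicit: once the loop terminates, act already points at the END action, so re-testing for END adds nothing and only the act_n count matters.

/* loop_postcondition.c: the only way out of the counting loop is
 * act->type == ACTION_END, so testing it again afterwards is a no-op. */
#include <stdio.h>

enum action_type { ACTION_MARK, ACTION_QUEUE, ACTION_END };
struct action { enum action_type type; };

int main(void)
{
    struct action acts[] = { {ACTION_MARK}, {ACTION_QUEUE}, {ACTION_END} };
    const struct action *act = acts;
    int act_n = 0;

    for (; act->type != ACTION_END; act++)
        act_n++;

    /* Here "act_n && act->type == ACTION_END" reduces to "act_n > 0". */
    printf("act_n=%d, at_end=%d\n", act_n, act->type == ACTION_END);
    return 0;
}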
* patch 'app/testpmd: avoid potential outside of array reference' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (93 preceding siblings ...)
2024-12-07 8:00 ` patch 'app/testpmd: remove redundant policy action condition' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'doc: correct definition of stats per queue feature' " Xueming Li
2024-12-07 8:00 ` patch 'devtools: fix check of multiple commits fixed at once' " Xueming Li
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger
Cc: Xueming Li, Bruce Richardson, Chengwen Feng, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=747ba3671e05f61b7113fb767a1394a6fb34a5e8
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 747ba3671e05f61b7113fb767a1394a6fb34a5e8 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 21 Nov 2024 10:23:30 -0800
Subject: [PATCH] app/testpmd: avoid potential outside of array reference
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit f86085caab0c6c5dc630b9d6ad20d1c728e7703e ]
The order of comparison is wrong, and potentially allows
referencing past the array.
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: 3e3edab530a1 ("ethdev: add flow quota")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---
app/test-pmd/cmdline_flow.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/app/test-pmd/cmdline_flow.c b/app/test-pmd/cmdline_flow.c
index 4b13d84ad1..661c72c7ef 100644
--- a/app/test-pmd/cmdline_flow.c
+++ b/app/test-pmd/cmdline_flow.c
@@ -12111,7 +12111,7 @@ comp_names_to_index(struct context *ctx, const struct token *token,
RTE_SET_USED(token);
if (!buf)
return names_size;
- if (names[ent] && ent < names_size)
+ if (ent < names_size && names[ent] != NULL)
return rte_strscpy(buf, names[ent], size);
return -1;
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.208956809 +0800
+++ 0095-app-testpmd-avoid-potential-outside-of-array-referen.patch 2024-12-06 23:26:44.113044826 +0800
@@ -1 +1 @@
-From f86085caab0c6c5dc630b9d6ad20d1c728e7703e Mon Sep 17 00:00:00 2001
+From 747ba3671e05f61b7113fb767a1394a6fb34a5e8 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit f86085caab0c6c5dc630b9d6ad20d1c728e7703e ]
@@ -11 +13,0 @@
-Cc: stable@dpdk.org
@@ -21 +23 @@
-index 1e4f2ebc55..9e4fc2d95d 100644
+index 4b13d84ad1..661c72c7ef 100644
@@ -24 +26 @@
-@@ -12892,7 +12892,7 @@ comp_names_to_index(struct context *ctx, const struct token *token,
+@@ -12111,7 +12111,7 @@ comp_names_to_index(struct context *ctx, const struct token *token,
^ permalink raw reply [flat|nested] 230+ messages in thread
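A small standalone sketch of the ordering point (array and function names are invented; this is not the testpmd code): && evaluates its operands left to right and short-circuits, so the bounds test must come first, otherwise the array is indexed before the index is known to be valid.

/* bounds_order.c: check the index before using it to subscript. */
#include <stddef.h>
#include <stdio.h>

static const char *names[] = { "alpha", "beta", "gamma" };
#define NAMES_SIZE (sizeof(names) / sizeof(names[0]))

static const char *lookup(size_t ent)
{
    /* The buggy order, "names[ent] != NULL && ent < NAMES_SIZE",
     * would read names[ent] even when ent is out of range. */
    if (ent < NAMES_SIZE && names[ent] != NULL)
        return names[ent];
    return NULL;
}

int main(void)
{
    printf("%s\n", lookup(1));              /* "beta" */
    printf("%p\n", (void *)lookup(100));    /* NULL, no out-of-bounds read */
    return 0;
}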
* patch 'doc: correct definition of stats per queue feature' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (94 preceding siblings ...)
2024-12-07 8:00 ` patch 'app/testpmd: avoid potential outside of array reference' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
2024-12-07 8:00 ` patch 'devtools: fix check of multiple commits fixed at once' " Xueming Li
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Stephen Hemminger; +Cc: Xueming Li, Ferruh Yigit, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=4b216d36bd1c83e1e82ac46129de26cbb8f9ac99
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 4b216d36bd1c83e1e82ac46129de26cbb8f9ac99 Mon Sep 17 00:00:00 2001
From: Stephen Hemminger <stephen@networkplumber.org>
Date: Thu, 10 Oct 2024 18:38:27 -0700
Subject: [PATCH] doc: correct definition of stats per queue feature
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit 71eae7fe3eac90b70200460c714d1c13ee43dc25 ]
Change the documentation to match current usage of this feature
in the NIC table. Move this subheading to be after basic
stats because the queue stats reported now are in the same structure.
Although the "Stats per Queue" feature was originally intended
to be related to stats mapping, the overwhelming majority of drivers
report this feature with a different meaning.
Hopefully in a later release the per-queue stats limitations
can be fixed, but this requires API, ABI, and lots of driver
changes.
Fixes: dad1ec72a377 ("doc: document NIC features")
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
---
doc/guides/nics/features.rst | 34 ++++++++++++++++++++--------------
1 file changed, 20 insertions(+), 14 deletions(-)
diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index cf9fabb8b8..7b48e4e991 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -716,14 +716,32 @@ Basic stats
Support basic statistics such as: ipackets, opackets, ibytes, obytes,
imissed, ierrors, oerrors, rx_nombuf.
-And per queue stats: q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors.
-
These apply to all drivers.
* **[implements] eth_dev_ops**: ``stats_get``, ``stats_reset``.
* **[related] API**: ``rte_eth_stats_get``, ``rte_eth_stats_reset()``.
+.. _nic_features_stats_per_queue:
+
+Stats per queue
+---------------
+
+Supports per queue stats: q_ipackets, q_opackets, q_ibytes, q_obytes, q_errors.
+Statistics only supplied for first ``RTE_ETHDEV_QUEUE_STAT_CNTRS`` (16) queues.
+If driver does not support this feature the per queue stats will be zero.
+
+* **[implements] eth_dev_ops**: ``stats_get``, ``stats_reset``.
+* **[related] API**: ``rte_eth_stats_get``, ``rte_eth_stats_reset()``.
+
+May also support configuring per-queue stat counter mapping.
+Used by some drivers to workaround HW limitations.
+
+* **[implements] eth_dev_ops**: ``queue_stats_mapping_set``.
+* **[related] API**: ``rte_eth_dev_set_rx_queue_stats_mapping()``,
+ ``rte_eth_dev_set_tx_queue_stats_mapping()``.
+
+
.. _nic_features_extended_stats:
Extended stats
@@ -738,18 +756,6 @@ Supports Extended Statistics, changes from driver to driver.
``rte_eth_xstats_get_names_by_id()``, ``rte_eth_xstats_get_id_by_name()``.
-.. _nic_features_stats_per_queue:
-
-Stats per queue
----------------
-
-Supports configuring per-queue stat counter mapping.
-
-* **[implements] eth_dev_ops**: ``queue_stats_mapping_set``.
-* **[related] API**: ``rte_eth_dev_set_rx_queue_stats_mapping()``,
- ``rte_eth_dev_set_tx_queue_stats_mapping()``.
-
-
.. _nic_features_congestion_management:
Congestion management
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.249632509 +0800
+++ 0096-doc-correct-definition-of-stats-per-queue-feature.patch 2024-12-06 23:26:44.123044826 +0800
@@ -1 +1 @@
-From 71eae7fe3eac90b70200460c714d1c13ee43dc25 Mon Sep 17 00:00:00 2001
+From 4b216d36bd1c83e1e82ac46129de26cbb8f9ac99 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit 71eae7fe3eac90b70200460c714d1c13ee43dc25 ]
@@ -19 +21,0 @@
-Cc: stable@dpdk.org
@@ -28 +30 @@
-index 0508f118fe..8bd448139e 100644
+index cf9fabb8b8..7b48e4e991 100644
@@ -31 +33 @@
-@@ -729,14 +729,32 @@ Basic stats
+@@ -716,14 +716,32 @@ Basic stats
@@ -66 +68 @@
-@@ -751,18 +769,6 @@ Supports Extended Statistics, changes from driver to driver.
+@@ -738,18 +756,6 @@ Supports Extended Statistics, changes from driver to driver.
^ permalink raw reply [flat|nested] 230+ messages in thread
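As a usage illustration of the documented behaviour (a fragment, not part of the patch: it assumes EAL is initialized, the port is started, and nb_rxq comes from the application), the per-queue counters are read through the same rte_eth_stats_get() call as the basic stats, and only the first RTE_ETHDEV_QUEUE_STAT_CNTRS queues carry data:

/* print_rxq_stats(): dump per-queue Rx counters for one port. */
#include <inttypes.h>
#include <stdio.h>

#include <rte_common.h>
#include <rte_ethdev.h>

static void
print_rxq_stats(uint16_t port_id, uint16_t nb_rxq)
{
    struct rte_eth_stats stats;
    uint16_t q, n;

    if (rte_eth_stats_get(port_id, &stats) != 0)
        return;

    /* Counters beyond RTE_ETHDEV_QUEUE_STAT_CNTRS are not reported;
     * drivers without the feature leave the array at zero. */
    n = RTE_MIN(nb_rxq, (uint16_t)RTE_ETHDEV_QUEUE_STAT_CNTRS);
    for (q = 0; q < n; q++)
        printf("port %u rxq %u: packets=%" PRIu64 " bytes=%" PRIu64 "\n",
               port_id, q, stats.q_ipackets[q], stats.q_ibytes[q]);
}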
* patch 'devtools: fix check of multiple commits fixed at once' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patches " Xueming Li
` (95 preceding siblings ...)
2024-12-07 8:00 ` patch 'doc: correct definition of stats per queue feature' " Xueming Li
@ 2024-12-07 8:00 ` Xueming Li
96 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-07 8:00 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: Xueming Li, Luca Boccassi, Kevin Traynor, dpdk stable
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=87f65fa28ad09345f55e58e35e2c856ff4fc8bc2
Thanks.
Xueming Li <xuemingl@nvidia.com>
---
From 87f65fa28ad09345f55e58e35e2c856ff4fc8bc2 Mon Sep 17 00:00:00 2001
From: Thomas Monjalon <thomas@monjalon.net>
Date: Tue, 18 Apr 2023 16:07:25 +0200
Subject: [PATCH] devtools: fix check of multiple commits fixed at once
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit dbee69686b63fab960a98295a7de542d45de9b6d ]
When looking for fixes to backport,
only the first origin commit hash (from "Fixes:") was checked.
There is very little chance that the later commits being fixed
have a wrong hash in the commit log of the fix,
but this is now covered by checking them all before proceeding further.
Fixes: 752d8e097ec1 ("scripts: show fixes with release version of bug")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Luca Boccassi <bluca@debian.org>
Acked-by: Kevin Traynor <ktraynor@redhat.com>
---
devtools/git-log-fixes.sh | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/devtools/git-log-fixes.sh b/devtools/git-log-fixes.sh
index 8a4a8470c2..4690dd4545 100755
--- a/devtools/git-log-fixes.sh
+++ b/devtools/git-log-fixes.sh
@@ -68,7 +68,7 @@ origin_version () # <origin_hash> ...
{
for origin in $* ; do
# check hash is valid
- git rev-parse -q --verify $1 >&- || continue
+ git rev-parse -q --verify $origin >&- || continue
# get version of this bug origin
local origver=$(commit_version $origin)
local roothashes="$(origin_filter $origin)"
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:47.280094210 +0800
+++ 0097-devtools-fix-check-of-multiple-commits-fixed-at-once.patch 2024-12-06 23:26:44.123044826 +0800
@@ -1 +1 @@
-From dbee69686b63fab960a98295a7de542d45de9b6d Mon Sep 17 00:00:00 2001
+From 87f65fa28ad09345f55e58e35e2c856ff4fc8bc2 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit dbee69686b63fab960a98295a7de542d45de9b6d ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
^ permalink raw reply [flat|nested] 230+ messages in thread
* RE: [EXTERNAL] patch 'net/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
2024-12-07 8:00 ` patch 'net/cnxk: " Xueming Li
@ 2024-12-09 5:42 ` Sunil Kumar Kori
2024-12-09 5:42 ` Sunil Kumar Kori
1 sibling, 0 replies; 230+ messages in thread
From: Sunil Kumar Kori @ 2024-12-09 5:42 UTC (permalink / raw)
To: Xueming Li; +Cc: dpdk stable
[-- Attachment #1: Type: text/plain, Size: 5618 bytes --]
From: Xueming Li <xuemingl@nvidia.com>
Sent: Saturday, December 7, 2024 1:30 PM
To: Sunil Kumar Kori <skori@marvell.com>
Cc: Xueming Li <xuemingl@nvidia.com>; dpdk stable <stable@dpdk.org>
Subject: [EXTERNAL] patch 'net/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d39e6e67897cafd530b554985801a9f8f7092012
Thanks.
Xueming Li <xuemingl@nvidia.com>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
---
From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Thu, 14 Nov 2024 13:08:16 +0530
Subject: [PATCH] net/cnxk: fix build on Ubuntu 24.04
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
Due to implicit unsigned to signed integer conversion, actual value gets
wrapped and becomes higher than its size.
Bugzilla ID: 1513
Fixes: 03b152389fb1 ("net/cnxk: add option to enable custom inbound SA")
Fixes: 7df4ead35436 ("net/cnxk: support parsing custom SA action")
Fixes: 47cca253d605 ("net/cnxk: support Rx inject")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index a0e9300cff..8c022e5f08 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
uint16_t custom_sa_act = 0;
struct rte_kvargs *kvlist;
uint32_t meta_buf_sz = 0;
+ uint16_t lock_rx_ctx = 0;
uint16_t no_inl_dev = 0;
- uint8_t lock_rx_ctx = 0;
memset(&sdp_chan, 0, sizeof(sdp_chan));
memset(&pre_l2_info, 0, sizeof(struct flow_pre_l2_size_info));
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.164810003 +0800
+++ 0063-net-cnxk-fix-build-on-Ubuntu-24.04.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From b9799fb5e7a38c824c91b88d3c89250d23c783e6 Mon Sep 17 00:00:00 2001
+From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
+ drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -21 +23 @@
-index 5bd50bb9a1..ecc2ea8b77 100644
+index a0e9300cff..8c022e5f08 100644
@@ -24,3 +26 @@
-@@ -305,12 +305,12 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
- uint16_t scalar_enable = 0;
- uint16_t tx_compl_ena = 0;
+@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
@@ -28,2 +27,0 @@
-- uint8_t custom_inb_sa = 0;
-+ uint16_t custom_inb_sa = 0;
@@ -33 +30,0 @@
-+ uint16_t rx_inj_ena = 0;
@@ -36 +32,0 @@
-- uint8_t rx_inj_ena = 0;
[-- Attachment #2: Type: text/html, Size: 20401 bytes --]
^ permalink raw reply [flat|nested] 230+ messages in thread
* RE: [EXTERNAL] patch 'net/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
2024-12-07 8:00 ` patch 'net/cnxk: " Xueming Li
2024-12-09 5:42 ` [EXTERNAL] " Sunil Kumar Kori
@ 2024-12-09 5:42 ` Sunil Kumar Kori
1 sibling, 0 replies; 230+ messages in thread
From: Sunil Kumar Kori @ 2024-12-09 5:42 UTC (permalink / raw)
To: Xueming Li; +Cc: dpdk stable
[-- Attachment #1: Type: text/plain, Size: 5642 bytes --]
From: Xueming Li <xuemingl@nvidia.com>
Sent: Saturday, December 7, 2024 1:30 PM
To: Sunil Kumar Kori <skori@marvell.com>
Cc: Xueming Li <xuemingl@nvidia.com>; dpdk stable <stable@dpdk.org>
Subject: [EXTERNAL] patch 'net/cnxk: fix build on Ubuntu 24.04' has been queued to stable release 23.11.3
Hi,
FYI, your patch has been queued to stable release 23.11.3
Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 12/10/24. So please
shout if anyone has objections.
Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch. This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.
Queued patches are on a temporary branch at:
https://git.dpdk.org/dpdk-stable/log/?h=23.11-staging
This queued commit can be viewed at:
https://git.dpdk.org/dpdk-stable/commit/?h=23.11-staging&id=d39e6e67897cafd530b554985801a9f8f7092012
Thanks.
Xueming Li <xuemingl@nvidia.com>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
---
From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
From: Sunil Kumar Kori <skori@marvell.com>
Date: Thu, 14 Nov 2024 13:08:16 +0530
Subject: [PATCH] net/cnxk: fix build on Ubuntu 24.04
Cc: Xueming Li <xuemingl@nvidia.com>
[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
Due to implicit unsigned to signed integer conversion, actual value gets
wrapped and becomes higher than its size.
Bugzilla ID: 1513
Fixes: 03b152389fb1 ("net/cnxk: add option to enable custom inbound SA")
Fixes: 7df4ead35436 ("net/cnxk: support parsing custom SA action")
Fixes: 47cca253d605 ("net/cnxk: support Rx inject")
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
---
drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/net/cnxk/cnxk_ethdev_devargs.c b/drivers/net/cnxk/cnxk_ethdev_devargs.c
index a0e9300cff..8c022e5f08 100644
--- a/drivers/net/cnxk/cnxk_ethdev_devargs.c
+++ b/drivers/net/cnxk/cnxk_ethdev_devargs.c
@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
uint16_t custom_sa_act = 0;
struct rte_kvargs *kvlist;
uint32_t meta_buf_sz = 0;
+ uint16_t lock_rx_ctx = 0;
uint16_t no_inl_dev = 0;
- uint8_t lock_rx_ctx = 0;
memset(&sdp_chan, 0, sizeof(sdp_chan));
memset(&pre_l2_info, 0, sizeof(struct flow_pre_l2_size_info));
--
2.34.1
---
Diff of the applied patch vs upstream commit (please double-check if non-empty):
---
--- - 2024-12-06 23:26:46.164810003 +0800
+++ 0063-net-cnxk-fix-build-on-Ubuntu-24.04.patch 2024-12-06 23:26:43.983044827 +0800
@@ -1 +1 @@
-From b9799fb5e7a38c824c91b88d3c89250d23c783e6 Mon Sep 17 00:00:00 2001
+From d39e6e67897cafd530b554985801a9f8f7092012 Mon Sep 17 00:00:00 2001
@@ -4,0 +5,3 @@
+Cc: Xueming Li <xuemingl@nvidia.com>
+
+[ upstream commit b9799fb5e7a38c824c91b88d3c89250d23c783e6 ]
@@ -13 +15,0 @@
-Cc: stable@dpdk.org
@@ -17,2 +19,2 @@
- drivers/net/cnxk/cnxk_ethdev_devargs.c | 6 +++---
- 1 file changed, 3 insertions(+), 3 deletions(-)
+ drivers/net/cnxk/cnxk_ethdev_devargs.c | 2 +-
+ 1 file changed, 1 insertion(+), 1 deletion(-)
@@ -21 +23 @@
-index 5bd50bb9a1..ecc2ea8b77 100644
+index a0e9300cff..8c022e5f08 100644
@@ -24,3 +26 @@
-@@ -305,12 +305,12 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
- uint16_t scalar_enable = 0;
- uint16_t tx_compl_ena = 0;
+@@ -303,8 +303,8 @@ cnxk_ethdev_parse_devargs(struct rte_devargs *devargs, struct cnxk_eth_dev *dev)
@@ -28,2 +27,0 @@
-- uint8_t custom_inb_sa = 0;
-+ uint16_t custom_inb_sa = 0;
@@ -33 +30,0 @@
-+ uint16_t rx_inj_ena = 0;
@@ -36 +32,0 @@
-- uint8_t rx_inj_ena = 0;
[-- Attachment #2: Type: text/html, Size: 20432 bytes --]
^ permalink raw reply [flat|nested] 230+ messages in thread
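The wrap-around the commit message refers to can be shown with a generic standalone sketch (this is not the cnxk devargs parser, and the exact conversion in the driver differs; the point is only that a value stored in a too-narrow integer is silently truncated):

/* narrow_wrap.c: implicit conversion to a narrower type drops bits. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t parsed = 300;      /* value a parser might hand back */
    uint8_t narrow = parsed;    /* truncated: 300 % 256 = 44 */
    uint16_t wide = parsed;     /* keeps the intended value */

    printf("narrow=%u wide=%u\n", narrow, wide);
    return 0;
}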
* Re: patch 'eal/unix: optimize thread creation' has been queued to stable release 23.11.3
2024-12-07 7:59 ` patch 'eal/unix: optimize thread creation' " Xueming Li
@ 2024-12-09 7:00 ` David Marchand
2024-12-09 8:04 ` Xueming Li
0 siblings, 1 reply; 230+ messages in thread
From: David Marchand @ 2024-12-09 7:00 UTC (permalink / raw)
To: Xueming Li; +Cc: Luca Boccassi, Stephen Hemminger, Chengwen Feng, dpdk stable
Hello Xueming,
On Sat, Dec 7, 2024 at 9:02 AM Xueming Li <xuemingl@nvidia.com> wrote:
> @@ -176,6 +192,14 @@ rte_thread_create(rte_thread_t *thread_id,
> }
> }
>
> +#ifdef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
> + ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
> + (void *)(void *)thread_func, args);
> + if (ret != 0) {
> + RTE_LOG(DEBUG, EAL, "pthread_create failed");
A \n is missing here.
> + goto cleanup;
> + }
> +#else /* !RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP */
> ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
> thread_start_wrapper, &ctx);
> if (ret != 0) {
--
David Marchand
^ permalink raw reply [flat|nested] 230+ messages in thread
* Re: patch 'eal/unix: optimize thread creation' has been queued to stable release 23.11.3
2024-12-09 7:00 ` David Marchand
@ 2024-12-09 8:04 ` Xueming Li
0 siblings, 0 replies; 230+ messages in thread
From: Xueming Li @ 2024-12-09 8:04 UTC (permalink / raw)
To: David Marchand
Cc: Luca Boccassi, Stephen Hemminger, Chengwen Feng, dpdk stable
[-- Attachment #1: Type: text/plain, Size: 1212 bytes --]
Thanks, I'll update.
________________________________
From: David Marchand <david.marchand@redhat.com>
Sent: Monday, December 9, 2024 3:00 PM
To: Xueming Li <xuemingl@nvidia.com>
Cc: Luca Boccassi <bluca@debian.org>; Stephen Hemminger <stephen@networkplumber.org>; Chengwen Feng <fengchengwen@huawei.com>; dpdk stable <stable@dpdk.org>
Subject: Re: patch 'eal/unix: optimize thread creation' has been queued to stable release 23.11.3
Hello Xueming,
On Sat, Dec 7, 2024 at 9:02 AM Xueming Li <xuemingl@nvidia.com> wrote:
> @@ -176,6 +192,14 @@ rte_thread_create(rte_thread_t *thread_id,
> }
> }
>
> +#ifdef RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP
> + ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
> + (void *)(void *)thread_func, args);
> + if (ret != 0) {
> + RTE_LOG(DEBUG, EAL, "pthread_create failed");
A \n is missing here.
> + goto cleanup;
> + }
> +#else /* !RTE_EAL_PTHREAD_ATTR_SETAFFINITY_NP */
> ret = pthread_create((pthread_t *)&thread_id->opaque_id, attrp,
> thread_start_wrapper, &ctx);
> if (ret != 0) {
--
David Marchand
[-- Attachment #2: Type: text/html, Size: 2925 bytes --]
^ permalink raw reply [flat|nested] 230+ messages in thread
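The hunk discussed above relies on the glibc extension pthread_attr_setaffinity_np(), which lets pthread_create() apply the CPU affinity itself instead of going through a start wrapper. A minimal standalone sketch of that mechanism (plain pthreads, not the DPDK rte_thread code; build with -pthread):

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg)
{
    (void)arg;
    printf("worker running on CPU %d\n", sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    cpu_set_t cpus;
    pthread_t tid;

    CPU_ZERO(&cpus);
    CPU_SET(0, &cpus);    /* pin the new thread to CPU 0 */

    pthread_attr_init(&attr);
    /* Affinity is applied by pthread_create() itself, so no
     * post-creation re-pinning (and no start wrapper) is needed. */
    pthread_attr_setaffinity_np(&attr, sizeof(cpus), &cpus);

    if (pthread_create(&tid, &attr, worker, NULL) != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}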
end of thread, other threads:[~2024-12-09 8:04 UTC | newest]
Thread overview: 230+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2024-11-11 6:26 patch has been queued to stable release 23.11.3 Xueming Li
2024-11-11 6:26 ` patch 'bus/vdev: revert fix devargs in secondary process' " Xueming Li
2024-11-11 6:26 ` patch 'log: add a per line log helper' " Xueming Li
2024-11-12 9:02 ` David Marchand
2024-11-12 11:35 ` Xueming Li
2024-11-12 12:47 ` David Marchand
2024-11-12 13:56 ` Xueming Li
2024-11-12 14:09 ` David Marchand
2024-11-12 14:11 ` Xueming Li
2024-11-11 6:26 ` patch 'drivers: remove redundant newline from logs' " Xueming Li
2024-11-11 6:26 ` patch 'eal/x86: fix 32-bit write combining store' " Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix schedule type' " Xueming Li
2024-11-11 6:26 ` patch 'test/event: fix target event queue' " Xueming Li
2024-11-11 6:26 ` patch 'examples/eventdev: fix queue crash with generic pipeline' " Xueming Li
2024-11-11 6:26 ` patch 'crypto/dpaa2_sec: fix memory leak' " Xueming Li
2024-11-11 6:26 ` patch 'common/dpaax/caamflib: fix PDCP SNOW-ZUC watchdog' " Xueming Li
2024-11-11 6:26 ` patch 'dev: fix callback lookup when unregistering device' " Xueming Li
2024-11-11 6:26 ` patch 'crypto/scheduler: fix session size computation' " Xueming Li
2024-11-11 6:26 ` patch 'examples/ipsec-secgw: fix dequeue count from cryptodev' " Xueming Li
2024-11-11 6:26 ` patch 'bpf: fix free function mismatch if convert fails' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/la12xx: fix use after free in modem config' " Xueming Li
2024-11-11 6:27 ` patch 'common/qat: fix use after free in device probe' " Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: fix use after free in mailbox init' " Xueming Li
2024-11-11 6:27 ` patch 'crypto/bcmfs: fix free function mismatch' " Xueming Li
2024-11-11 6:27 ` patch 'dma/idxd: fix free function mismatch in device probe' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix free function mismatch in port config' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix use after free in mempool create' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix invalid free in JSON parser' " Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix use after free in filter flush' " Xueming Li
2024-11-11 6:27 ` patch 'net/nfp: fix double free in flow destroy' " Xueming Li
2024-11-11 6:27 ` patch 'net/sfc: fix use after free in debug logs' " Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga/base: fix use after free' " Xueming Li
2024-11-11 6:27 ` patch 'raw/ifpga: fix free function mismatch in interrupt config' " Xueming Li
2024-11-11 6:27 ` patch 'examples/vhost: fix free function mismatch' " Xueming Li
2024-11-11 6:27 ` patch 'net/nfb: fix use after free' " Xueming Li
2024-11-11 6:27 ` patch 'power: enable CPPC' " Xueming Li
2024-11-11 6:27 ` patch 'fib6: add runtime checks in AVX512 lookup' " Xueming Li
2024-11-11 6:27 ` patch 'pcapng: fix handling of chained mbufs' " Xueming Li
2024-11-11 6:27 ` patch 'app/dumpcap: fix handling of jumbo frames' " Xueming Li
2024-11-11 6:27 ` patch 'ml/cnxk: fix handling of TVM model I/O' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx timestamp handling for VF' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix Rx offloads to handle timestamp' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix Rx timestamp handling' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix MAC address change with active VF' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix inline CTX write' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix CPT HW word size for outbound SA' " Xueming Li
2024-11-11 6:27 ` patch 'net/cnxk: fix OOP handling for inbound packets' " Xueming Li
2024-11-11 6:27 ` patch 'event/cnxk: fix OOP handling in event mode' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix base log level' " Xueming Li
2024-11-11 6:27 ` patch 'common/cnxk: fix IRQ reconfiguration' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix access to deallocated mem' " Xueming Li
2024-11-11 6:27 ` patch 'baseband/acc: fix soft output bypass RM' " Xueming Li
2024-11-11 6:27 ` patch 'vhost: fix offset while mapping log base address' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa: update used flags in used ring relay' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix hardware initialization' " Xueming Li
2024-11-11 6:27 ` patch 'vdpa/nfp: fix reconfiguration' " Xueming Li
2024-11-11 6:27 ` patch 'net/virtio-user: reset used index counter' " Xueming Li
2024-11-11 6:27 ` patch 'vhost: restrict set max queue pair API to VDUSE' " Xueming Li
2024-11-11 6:27 ` patch 'fib: fix AVX512 lookup' " Xueming Li
2024-11-11 6:27 ` patch 'net/e1000: fix link status crash in secondary process' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: add checks for flow action types' " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: fix crash when link is unstable' " Xueming Li
2024-11-11 6:27 ` patch 'net/cpfl: fix parsing protocol ID mask field' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix link speed for 200G' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice/base: fix iteration of TLVs in Preserved Fields Area' " Xueming Li
2024-11-11 6:27 ` patch 'net/ixgbe/base: fix unchecked return value' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix setting flags in init function' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix misleading debug logs and comments' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: add missing X710TL device check' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix blinking X722 with X557 PHY' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix DDP loading with reserved track ID' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix repeated register dumps' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix unchecked return value' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e/base: fix loop bounds' " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: delay VF reset command' " Xueming Li
2024-11-11 6:27 ` patch 'net/i40e: fix AVX-512 pointer copy on 32-bit' " Xueming Li
2024-11-11 6:27 ` patch 'net/ice: " Xueming Li
2024-11-11 6:27 ` patch 'net/iavf: " Xueming Li
2024-11-11 6:27 ` patch 'common/idpf: " Xueming Li
2024-11-11 6:27 ` patch 'net/gve: fix queue setup and stop' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix Tx for chained mbuf' " Xueming Li
2024-11-11 6:28 ` patch 'net/tap: avoid memcpy with null argument' " Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove unnecessary cast' " Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: set live interface as non-blocking' " Xueming Li
2024-11-11 6:28 ` patch 'net/mana: support rdma-core via pkg-config' " Xueming Li
2024-11-11 6:28 ` patch 'net/ena: revert redefining memcpy' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: remove some basic address dump' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: fix dump counter of registers' " Xueming Li
2024-11-11 6:28 ` patch 'ethdev: fix overflow in descriptor count' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix PFDRs leaks due to FQRNIs' " Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix typecasting channel ID' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix VSP for 1G fm1-mac9 and 10' " Xueming Li
2024-11-11 6:28 ` patch 'bus/dpaa: fix the fman details status' " Xueming Li
2024-11-11 6:28 ` patch 'net/dpaa: fix reallocate mbuf handling' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix mbuf allocation memory leak for DQ Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: always attempt Rx refill on DQ' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix type declaration of some variables' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix representor port link status update' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: fix refill logic causing memory corruption' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve: add IO memory barriers before reading descriptors' " Xueming Li
2024-11-11 6:28 ` patch 'net/memif: fix buffer overflow in zero copy Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/tap: restrict maximum number of MP FDs' " Xueming Li
2024-11-11 6:28 ` patch 'ethdev: verify queue ID in Tx done cleanup' " Xueming Li
2024-11-11 6:28 ` patch 'net/hns3: verify reset type from firmware' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix link change return value' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: fix pause frame setting check' " Xueming Li
2024-11-11 6:28 ` patch 'net/pcap: fix blocking Rx' " Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: add bounds check' " Xueming Li
2024-11-11 6:28 ` patch 'net/ice/base: fix VLAN replay after reset' " Xueming Li
2024-11-11 6:28 ` patch 'net/iavf: preserve MAC address with i40e PF Linux driver' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: workaround list management of Rx queue control' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5/hws: fix flex item as tunnel header' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: add flex item query for tunnel mode' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix number of supported flex parsers' " Xueming Li
2024-11-11 6:28 ` patch 'app/testpmd: remove flex item init command leftover' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix next protocol validation after flex item' " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix non full word sample fields in " Xueming Li
2024-11-11 6:28 ` patch 'net/mlx5: fix flex item header length field translation' " Xueming Li
2024-11-11 6:28 ` patch 'build: remove version check on compiler links function' " Xueming Li
2024-11-11 6:28 ` patch 'hash: fix thash LFSR initialization' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: notify flower firmware about PF speed' " Xueming Li
2024-11-11 6:28 ` patch 'net/nfp: do not set IPv6 flag in transport mode' " Xueming Li
2024-11-11 6:28 ` patch 'dmadev: fix potential null pointer access' " Xueming Li
2024-11-11 6:28 ` patch 'net/gve/base: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch 'power: fix mapped lcore ID' " Xueming Li
2024-11-11 6:28 ` patch 'net/ionic: fix build with Fedora Rawhide' " Xueming Li
2024-11-11 6:28 ` patch '' " Xueming Li
2024-12-07 7:59 ` patches " Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: fix using Tx queue higher than Rx queues' " Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: restrict tunnel flow rule to one header' " Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: register VLAN flow match mode parameter' " Xueming Li
2024-12-07 7:59 ` patch 'net/ice: detect stopping a flow director queue twice' " Xueming Li
2024-12-07 7:59 ` patch 'net/ixgbe: fix link status delay on FreeBSD' " Xueming Li
2024-12-07 7:59 ` patch 'net/mvneta: fix possible out-of-bounds write' " Xueming Li
2024-12-07 7:59 ` patch 'common/cnxk: fix double free of flow aging resources' " Xueming Li
2024-12-07 7:59 ` patch 'crypto/openssl: fix 3DES-CTR with big endian CPUs' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix trace script for multiple burst completion' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix real time counter reading from PCI BAR' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix Tx tracing to use single clock source' " Xueming Li
2024-12-07 7:59 ` patch 'eal/unix: optimize thread creation' " Xueming Li
2024-12-09 7:00 ` David Marchand
2024-12-09 8:04 ` Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix memory leak in metering' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix GRE flow item translation for root table' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5/hws: fix range definer error recovery' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix SQ flow item size' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix non-template flow action validation' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix SWS meter state initialization' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix reported Rx/Tx descriptor limits' " Xueming Li
2024-12-07 7:59 ` patch 'net/mlx5: fix indirect list flow action callback invocation' " Xueming Li
2024-12-07 7:59 ` patch 'app/dumpcap: remove unused struct array' " Xueming Li
2024-12-07 7:59 ` patch 'bus/fslmc: fix Coverity warnings in QBMAN' " Xueming Li
2024-12-07 7:59 ` patch 'net/dpaa2: fix memory corruption in TM' " Xueming Li
2024-12-07 7:59 ` patch 'examples/l3fwd-power: fix options parsing overflow' " Xueming Li
2024-12-07 7:59 ` patch 'examples/l3fwd: fix read beyond boundaries' " Xueming Li
2024-12-07 7:59 ` patch 'test/bonding: remove redundant info query' " Xueming Li
2024-12-07 7:59 ` patch 'examples/ntb: check info query return' " Xueming Li
2024-12-07 7:59 ` patch 'net/netvsc: force Tx VLAN offload on 801.2Q packet' " Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: fix crash after configuration failure' " Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: remove ROH devices' " Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix SWFW mbox' " Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix VF-PF mbox interrupt' " Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: remove outer UDP checksum capability' " Xueming Li
2024-12-07 7:59 ` patch 'net/txgbe: fix driver load bit to inform firmware' " Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: " Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: reconfigure more MAC Rx registers' " Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: fix interrupt lost in legacy or MSI mode' " Xueming Li
2024-12-07 7:59 ` patch 'net/ngbe: restrict configuration of VLAN strip offload' " Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: fix potential out of bounds stats access' " Xueming Li
2024-12-07 7:59 ` patch 'net/vmxnet3: support larger MTU with version 6' " Xueming Li
2024-12-07 7:59 ` patch 'net/hns3: fix error code for repeatedly create counter' " Xueming Li
2024-12-07 8:00 ` patch 'net/hns3: fix fully use hardware flow director table' " Xueming Li
2024-12-07 8:00 ` patch 'event/octeontx: fix possible integer overflow' " Xueming Li
2024-12-07 8:00 ` patch 'baseband/acc: fix ring memory allocation' " Xueming Li
2024-12-07 8:00 ` patch 'crypto/openssl: fix potential string overflow' " Xueming Li
2024-12-07 8:00 ` patch 'test/crypto: fix synchronous API calls' " Xueming Li
2024-12-07 8:00 ` patch 'crypto/qat: fix modexp/inv length' " Xueming Li
2024-12-07 8:00 ` patch 'crypto/qat: fix ECDSA session handling' " Xueming Li
2024-12-07 8:00 ` patch 'net/igc: fix Rx buffers when timestamping enabled' " Xueming Li
2024-12-07 8:00 ` patch 'net/cpfl: fix forwarding to physical port' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix WC TCAM multi-slice delete' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix TCAM manager data corruption' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix Thor TF EM key size check' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt/tf_core: fix slice count in case of HA entry move' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix reading SFF-8436 SFP EEPROMs' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix TCP and UDP checksum flags' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnxt: fix bad action offset in Tx BD' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: remove dead conditional' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix always true expression' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix possible infinite loop at startup' " Xueming Li
2024-12-07 8:00 ` patch 'net/bnx2x: fix duplicate branch' " Xueming Li
2024-12-07 8:00 ` patch 'common/cnxk: fix build on Ubuntu 24.04' " Xueming Li
2024-12-07 8:00 ` patch 'net/cnxk: " Xueming Li
2024-12-09 5:42 ` [EXTERNAL] " Sunil Kumar Kori
2024-12-09 5:42 ` Sunil Kumar Kori
2024-12-07 8:00 ` patch 'examples/l2fwd-event: fix spinlock handling' " Xueming Li
2024-12-07 8:00 ` patch 'eventdev: fix possible array underflow/overflow' " Xueming Li
2024-12-07 8:00 ` patch 'net/dpaa2: remove unnecessary check for null before free' " Xueming Li
2024-12-07 8:00 ` patch 'common/mlx5: fix error CQE handling for 128 bytes CQE' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix shared queue port number in vector Rx' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5/hws: fix allocation of STCs' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix counter query loop getting stuck' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue control management' " Xueming Li
2024-12-07 8:00 ` patch 'common/mlx5: fix misalignment' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix default RSS flows creation order' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix Rx queue reference count in flushing flows' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix miniCQEs number calculation' " Xueming Li
2024-12-07 8:00 ` patch 'bus/dpaa: fix lock condition during error handling' " Xueming Li
2024-12-07 8:00 ` patch 'net/iavf: add segment-length check to Tx prep' " Xueming Li
2024-12-07 8:00 ` patch 'net/i40e: check register read for outer VLAN' " Xueming Li
2024-12-07 8:00 ` patch 'common/dpaax/caamflib: enable fallthrough warnings' " Xueming Li
2024-12-07 8:00 ` patch 'net/e1000/base: fix fallthrough in switch' " Xueming Li
2024-12-07 8:00 ` patch 'app/procinfo: fix leak on exit' " Xueming Li
2024-12-07 8:00 ` patch 'member: fix choice of bucket for displacement' " Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: fix aged flow destroy' " Xueming Li
2024-12-07 8:00 ` patch 'net/mlx5: fix shared Rx queue control release' " Xueming Li
2024-12-07 8:00 ` patch 'vhost: fix deadlock in Rx async path' " Xueming Li
2024-12-07 8:00 ` patch 'net/txgbe: fix a mass of interrupts' " Xueming Li
2024-12-07 8:00 ` patch 'pcapng: avoid potential unaligned data' " Xueming Li
2024-12-07 8:00 ` patch 'test/bonding: fix loop on members' " Xueming Li
2024-12-07 8:00 ` patch 'test/bonding: fix MAC address comparison' " Xueming Li
2024-12-07 8:00 ` patch 'test/security: fix IPv6 extension loop' " Xueming Li
2024-12-07 8:00 ` patch 'test/event: avoid duplicate initialization' " Xueming Li
2024-12-07 8:00 ` patch 'test/eal: fix loop coverage for alignment macros' " Xueming Li
2024-12-07 8:00 ` patch 'test/eal: fix lcore check' " Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: remove redundant policy action condition' " Xueming Li
2024-12-07 8:00 ` patch 'app/testpmd: avoid potential outside of array reference' " Xueming Li
2024-12-07 8:00 ` patch 'doc: correct definition of stats per queue feature' " Xueming Li
2024-12-07 8:00 ` patch 'devtools: fix check of multiple commits fixed at once' " Xueming Li