* Failing to parse pci device
@ 2025-04-08 16:34 David Aldrich
From: David Aldrich @ 2025-04-08 16:34 UTC (permalink / raw)
To: users
Hi
I am trying to build a legacy application with DPDK 19.11.14. It links
successfully, but at runtime EAL fails to parse the whitelisted PCI device:
EAL parameters: phy_app --proc-type=primary --file-prefix wls -w 0000:43:00.1
EAL: Detected 64 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: failed to parse device "0000:43:00.1"
EAL: Unable to parse device '0000:43:00.1'
I think this suggests that the rte_bus_pci library is not linked in. Is
that correct?
I suspect my linker command is incorrect. I am using CMake and detect
the DPDK installation using pkg-config. My linker directive is:
target_link_libraries(testApp PRIVATE
-Wl,--start-group
-lpthread
-lrt
-lhugetlbfs
-Wl,-lm
-Wl,-lnuma
-L${WLS_LIB_PATH}
-lwls
-L${_dpdk_lib_path}
-Wl,--whole-archive
${DPDK_STATIC_LDFLAGS} # DPDK libraries - static linking
-Wl,--no-whole-archive
-Wl,--end-group
)
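Since the .pc file is already being detected, one alternative worth trying is
to let pkg-config generate the whole static link line instead of hand-ordering
it. This is only a sketch: it assumes libdpdk.pc is on PKG_CONFIG_PATH, and
the variable names follow CMake's FindPkgConfig module (the `dpdk_STATIC_*`
variables hold the output of `pkg-config --static`):

```cmake
# Sketch: drive the DPDK link line from libdpdk.pc (assumes libdpdk.pc is
# discoverable via PKG_CONFIG_PATH).
find_package(PkgConfig REQUIRED)
pkg_check_modules(dpdk REQUIRED libdpdk)

# dpdk_STATIC_LDFLAGS carries the flags from `pkg-config --static --libs
# libdpdk`, including the -Wl,--whole-archive ... -Wl,--no-whole-archive
# bracketing that the driver libraries need, already in the right order.
target_include_directories(testApp PRIVATE ${dpdk_STATIC_INCLUDE_DIRS})
target_link_directories(testApp PRIVATE ${dpdk_STATIC_LIBRARY_DIRS})
target_link_libraries(testApp PRIVATE ${dpdk_STATIC_LDFLAGS})
```

The point of this shape is that the ordering and grouping come from the .pc
file DPDK itself installs, so they stay correct across DPDK versions.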
I don't understand linker directives such as '--start-group' and
'--whole-archive' well. Could someone please review the
target_link_libraries directive above and suggest what may be wrong?
I should mention that the DPDK 23.11 driver is running on the target
server, but I get a similar parse error if I build with DPDK 23.11.
* Re: Failing to parse pci device
From: David Aldrich @ 2025-04-08 16:51 UTC (permalink / raw)
To: users
I should have mentioned that the actual, expanded linker command is:
/usr/bin/cc -g <snip> -Wl,--start-group -lpthread -lrt -lhugetlbfs
-Wl,-lm -Wl,-lnuma -L<snip>/wls_lib -lwls
-L/opt/dpdk/dpdk-stable-19.11.14/x86_64-native-linux-gcc/lib/x86_64-linux-gnu/
-Wl,--whole-archive -L/lib/x86_64-linux-gnu -lrte_bpf
-lrte_flow_classify -lrte_pipeline -lrte_table -lrte_port
-lrte_fib -lrte_ipsec -lrte_vhost -lrte_stack -lrte_security
-lrte_sched -lrte_reorder -lrte_rib -lrte_rcu -lrte_rawdev
-lrte_pdump -lrte_power -lrte_member -lrte_lpm -lrte_latencystats
-lrte_kni -lrte_jobstats -lrte_ip_frag -lrte_gso -lrte_gro
-lrte_eventdev -lrte_efd -lrte_distributor -lrte_cryptodev
-lrte_compressdev -lrte_cfgfile -lrte_bitratestats -lrte_bbdev
-lrte_acl -lrte_timer -lrte_hash -lrte_metrics -lrte_cmdline
-lrte_pci -lrte_ethdev -lrte_meter -lrte_net -lrte_mbuf
-lrte_mempool -lrte_ring -lrte_eal -lrte_kvargs
-Wl,--whole-archive -L/lib/x86_64-linux-gnu -lrte_common_cpt
-lrte_common_dpaax -lrte_common_octeontx -lrte_common_octeontx2
-lrte_bus_dpaa -lrte_bus_fslmc -lrte_bus_ifpga -lrte_bus_pci
-lrte_bus_vdev -lrte_bus_vmbus -lrte_mempool_bucket
-lrte_mempool_dpaa -lrte_mempool_dpaa2 -lrte_mempool_octeontx
-lrte_mempool_octeontx2 -lrte_mempool_ring -lrte_mempool_stack
-lrte_pmd_af_packet -lrte_pmd_ark -lrte_pmd_atlantic -lrte_pmd_avp
-lrte_pmd_axgbe -lrte_pmd_bond -lrte_pmd_bnx2x -lrte_pmd_bnxt
-lrte_pmd_cxgbe -lrte_pmd_dpaa -lrte_pmd_dpaa2 -lrte_pmd_e1000
-lrte_pmd_ena -lrte_pmd_enetc -lrte_pmd_enic -lrte_pmd_failsafe
-lrte_pmd_fm10k -lrte_pmd_i40e -lrte_pmd_hinic -lrte_pmd_hns3
-lrte_pmd_iavf -lrte_pmd_ice -lrte_pmd_ifc -lrte_pmd_ixgbe
-lrte_pmd_kni -lrte_pmd_liquidio -lrte_pmd_memif -lrte_pmd_netvsc
-lrte_pmd_nfp -lrte_pmd_null -lrte_pmd_octeontx -lrte_pmd_octeontx2
-lrte_pmd_pfe -lrte_pmd_qede -lrte_pmd_ring -lrte_pmd_sfc
-lrte_pmd_softnic -lrte_pmd_tap -lrte_pmd_thunderx
-lrte_pmd_vdev_netvsc -lrte_pmd_vhost -lrte_pmd_virtio
-lrte_pmd_vmxnet3 -lrte_rawdev_dpaa2_cmdif -lrte_rawdev_dpaa2_qdma
-lrte_rawdev_ioat -lrte_rawdev_ntb -lrte_rawdev_octeontx2_dma
-lrte_rawdev_skeleton -lrte_pmd_caam_jr -lrte_pmd_dpaa_sec
-lrte_pmd_dpaa2_sec -lrte_pmd_nitrox -lrte_pmd_null_crypto
-lrte_pmd_octeontx_crypto -lrte_pmd_octeontx2_crypto
-lrte_pmd_crypto_scheduler -lrte_pmd_virtio_crypto
-lrte_pmd_octeontx_compress -lrte_pmd_qat -lrte_pmd_zlib
-lrte_pmd_dpaa_event -lrte_pmd_dpaa2_event -lrte_pmd_octeontx2_event
-lrte_pmd_opdl_event -lrte_pmd_skeleton_event -lrte_pmd_sw_event
-lrte_pmd_dsw_event -lrte_pmd_octeontx_event -lrte_pmd_bbdev_null
-lrte_pmd_bbdev_turbo_sw -lrte_pmd_bbdev_fpga_lte_fec
-Wl,--no-whole-archive -Wl,--export-dynamic -lrte_bpf
-lrte_flow_classify -lrte_pipeline -lrte_table -lrte_port
-lrte_fib -lrte_ipsec -lrte_vhost -lrte_stack -lrte_security
-lrte_sched -lrte_reorder -lrte_rib -lrte_rcu -lrte_rawdev
-lrte_pdump -lrte_power -lrte_member -lrte_lpm -lrte_latencystats
-lrte_kni -lrte_jobstats -lrte_ip_frag -lrte_gso -lrte_gro
-lrte_eventdev -lrte_efd -lrte_distributor -lrte_cryptodev
-lrte_compressdev -lrte_cfgfile -lrte_bitratestats -lrte_bbdev
-lrte_acl -lrte_timer -lrte_hash -lrte_metrics -lrte_cmdline
-lrte_pci -lrte_ethdev -lrte_meter -lrte_net -lrte_mbuf
-lrte_mempool -lrte_ring -lrte_eal -lrte_kvargs -Wl,-Bdynamic
-pthread -lm -ldl -lnuma -L/usr/lib/x86_64-linux-gnu
-L/usr/lib/x86_64-linux-gnu -lz -lelf -L/usr/lib/x86_64-linux-gnu
-L/usr/lib/x86_64-linux-gnu -lz -Wl,--no-whole-archive
-Wl,--end-group -lrte_common_cpt -lrte_common_dpaax
-lrte_common_octeontx -lrte_common_octeontx2 -lrte_bus_dpaa
-lrte_bus_fslmc -lrte_bus_ifpga -lrte_bus_pci -lrte_bus_vdev
-lrte_bus_vmbus -lrte_mempool_bucket -lrte_mempool_dpaa
-lrte_mempool_dpaa2 -lrte_mempool_octeontx -lrte_mempool_octeontx2
-lrte_mempool_ring -lrte_mempool_stack -lrte_pmd_af_packet
-lrte_pmd_ark -lrte_pmd_atlantic -lrte_pmd_avp -lrte_pmd_axgbe
-lrte_pmd_bond -lrte_pmd_bnx2x -lrte_pmd_bnxt -lrte_pmd_cxgbe
-lrte_pmd_dpaa -lrte_pmd_dpaa2 -lrte_pmd_e1000 -lrte_pmd_ena
-lrte_pmd_enetc -lrte_pmd_enic -lrte_pmd_failsafe -lrte_pmd_fm10k
-lrte_pmd_i40e -lrte_pmd_hinic -lrte_pmd_hns3 -lrte_pmd_iavf
-lrte_pmd_ice -lrte_pmd_ifc -lrte_pmd_ixgbe -lrte_pmd_kni
-lrte_pmd_liquidio -lrte_pmd_memif -lrte_pmd_netvsc -lrte_pmd_nfp
-lrte_pmd_null -lrte_pmd_octeontx -lrte_pmd_octeontx2 -lrte_pmd_pfe
-lrte_pmd_qede -lrte_pmd_ring -lrte_pmd_sfc -lrte_pmd_softnic
-lrte_pmd_tap -lrte_pmd_thunderx -lrte_pmd_vdev_netvsc
-lrte_pmd_vhost -lrte_pmd_virtio -lrte_pmd_vmxnet3
-lrte_rawdev_dpaa2_cmdif -lrte_rawdev_dpaa2_qdma -lrte_rawdev_ioat
-lrte_rawdev_ntb -lrte_rawdev_octeontx2_dma -lrte_rawdev_skeleton
-lrte_pmd_caam_jr -lrte_pmd_dpaa_sec -lrte_pmd_dpaa2_sec
-lrte_pmd_nitrox -lrte_pmd_null_crypto -lrte_pmd_octeontx_crypto
-lrte_pmd_octeontx2_crypto -lrte_pmd_crypto_scheduler
-lrte_pmd_virtio_crypto -lrte_pmd_octeontx_compress -lrte_pmd_qat
-lrte_pmd_zlib -lrte_pmd_dpaa_event -lrte_pmd_dpaa2_event
-lrte_pmd_octeontx2_event -lrte_pmd_opdl_event
-lrte_pmd_skeleton_event -lrte_pmd_sw_event -lrte_pmd_dsw_event
-lrte_pmd_octeontx_event -lrte_pmd_bbdev_null
-lrte_pmd_bbdev_turbo_sw -lrte_pmd_bbdev_fpga_lte_fec -lm -ldl
-lnuma -lz -lelf -lz -lelf && :
On Tue, Apr 8, 2025 at 5:34 PM David Aldrich
<david.aldrich.ntml@gmail.com> wrote:
> [quoted message snipped]
* Re: Failing to parse pci device
From: David Aldrich @ 2025-04-09 17:24 UTC (permalink / raw)
To: users
This is now fixed. The problem was a driver version mismatch.