I've been running my application for years on igb_uio with Intel NICs. I recently replaced them with a dual-port 40 Gbps Mellanox ConnectX-5, updated the DPDK version my application uses, and compiled with support for the mlx5 PMD. Both 40 Gbps ports are up with link, and both are in Ethernet mode, not InfiniBand mode. However, when I start my application I get complaints about failing to load 'mlx5_eth'. Both ports are bound to the mlx5_core driver at the moment. When I bind them to vfio-pci or uio_pci_generic instead, my application fails to recognize them as valid DPDK devices at all. Anyone have any ideas? It's also strange that it only complains about one of the two ports. They are configured in a bond on the kernel side, as my application requires that.

Network devices using kernel driver
===================================
0000:2b:00.0 'MT27800 Family [ConnectX-5] 1017' if=enp43s0f0np0 drv=mlx5_core unused=vfio-pci
0000:2b:00.1 'MT27800 Family [ConnectX-5] 1017' if=enp43s0f1np1 drv=mlx5_core unused=vfio-pci

root@DDoSMitigation:~/anubis/engine/bin# ./anubis-engine

  Electric Fence 2.2 Copyright (C) 1987-1999 Bruce Perens
EAL: Detected CPU lcores: 12
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:2b:00.0 (socket -1)
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:2b:00.1 (socket -1)
mlx5_net: PF 0 doesn't have Verbs device matches PCI device 0000:2b:00.1, are kernel drivers loaded?
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:2b:00.1 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
USER1: Anubis build master/.
USER1: We will run on 12 logical cores.
USER1: Enabled lcores not a power of 2! This could have performance issues.
KNI: WARNING: KNI is deprecated and will be removed in DPDK 23.11
USER1: Failed to reset link fe0.
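
Since the probe error mentions a missing Verbs device, here are the checks and the rebind I ran on my box before capturing the log above. This is just what I tried, reconstructed from my shell history; the PCI addresses are from my system, and the `ibv_devices` tool comes from rdma-core, which I'm not certain is even installed here:

    # mlx5 is a bifurcated PMD, so I bound the ports back to mlx5_core
    # (these are my box's PCI addresses):
    dpdk-devbind.py -b mlx5_core 0000:2b:00.0 0000:2b:00.1
    dpdk-devbind.py --status

    # Check the kernel side of the stack; mlx5_ib and the RDMA/Verbs
    # layer are what I suspect might be missing:
    lsmod | grep -E 'mlx5_core|mlx5_ib|ib_uverbs'
    ls /sys/class/infiniband/          # expected one entry per port
    ibv_devices                        # rdma-core utility, if installed

The rebind itself succeeds (as the devbind status block above shows), so my suspicion is the Verbs/rdma-core layer rather than the PCI binding, but I'd appreciate confirmation.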