I figured out the initial issue. For some reason, having both devices in a kernel bond results in only one of the two ports being exposed as a verbs device; previously, ibv_devinfo returned only one port. After removing both from the bond, ibv_devinfo returns both ports, and the DPDK application picks both of them up successfully.

I'm still seeing some odd behavior trying to create a bypass interface with these ports, though, using the same code I've been running on my Intel NICs with igb_uio for years. The ports are connected to our 40Gbps Ethernet switch and report link_layer: Ethernet.

The first thing I noticed is that rte_eth_dev_reset() fails on these interfaces with "ENOTSUP: hardware doesn't support reset". For now I'm thinking of falling back to a plain stop/start cycle when reset isn't supported; there's a rough sketch after the code below.

Secondly, when checking ptypes, my code says these NICs are unable to support any sort of packet type detection (code below; all of the checks come back false). The MLX5 docs do say that all of the ptypes used here are supported by MLX5.

I'm picking up a project that was left off by a developer who is no longer here. It hasn't been touched in years, but it has been working fine with our Intel NICs. All I'm trying to do is update DPDK (done: from DPDK 19.05 to DPDK 22.11, the latest version with KNI support) and get it working with our Mellanox ConnectX-5 NICs. This is my first time working with DPDK and I'm not very familiar with it. Should I expect to be able to do this without making a ton of code changes, or is this going to be an uphill battle for me? If it's the latter, I will likely just go purchase Intel NICs and give up on this.

bool Manager::HasPacketTypeDetectionCapabilities(uint16_t port_id) {
    // First ask whether the driver reports any supported ptypes at all.
    // The API returns a negative errno on failure, so check < 0.
    int num_ptypes = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_ALL_MASK, nullptr, 0);
    if (num_ptypes < 0) {
        RTE_LOG(WARNING, USER1,
                "Driver does not report supported ptypes; "
                "enabling software parsing.\n");
        // Proceed with software-based packet type parsing
        return true;
    }

    // Detect L2 packet type detection capability
    bool detect_ether = false, detect_arp = false;
    auto num_l2_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L2_MASK, nullptr, 0);
    if (num_l2_dtx < 0) {
        RTE_LOG(ERR, USER1, "Failed to detect L2 detection capabilities.\n");
        return false;
    }
    uint32_t l2_ptypes[num_l2_dtx];
    num_l2_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L2_MASK, l2_ptypes, num_l2_dtx);
    for (auto i = 0; i < num_l2_dtx; ++i) {
        if (l2_ptypes[i] & RTE_PTYPE_L2_ETHER)
            detect_ether = true;
        // RTE_PTYPE_L2_ETHER_ARP is a distinct enum value, not a flag bit,
        // so compare for equality rather than bit-ANDing.
        if (l2_ptypes[i] == RTE_PTYPE_L2_ETHER_ARP)
            detect_arp = true;
    }

    // Detect L3 packet type detection capability
    bool detect_ipv4 = false, detect_ipv6 = false;
    auto num_l3_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L3_MASK, nullptr, 0);
    if (num_l3_dtx < 0) {
        RTE_LOG(ERR, USER1, "Failed to detect L3 detection capabilities.\n");
        return false;
    }
    uint32_t l3_ptypes[num_l3_dtx];
    num_l3_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L3_MASK, l3_ptypes, num_l3_dtx);
    for (auto i = 0; i < num_l3_dtx; ++i) {
        // Bit-AND is fine here: every IPv4 variant (e.g. mlx5's
        // RTE_PTYPE_L3_IPV4_EXT_UNKNOWN) keeps the IPV4 bit set,
        // and likewise for IPv6.
        if (l3_ptypes[i] & RTE_PTYPE_L3_IPV4)
            detect_ipv4 = true;
        if (l3_ptypes[i] & RTE_PTYPE_L3_IPV6)
            detect_ipv6 = true;
    }

    // Detect L4 packet type detection capability
    bool detect_tcp = false, detect_udp = false, detect_icmp = false;
    auto num_l4_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L4_MASK, nullptr, 0);
    if (num_l4_dtx < 0) {
        RTE_LOG(ERR, USER1, "Failed to detect L4 detection capabilities.\n");
        return false;
    }
    uint32_t l4_ptypes[num_l4_dtx];
    num_l4_dtx = rte_eth_dev_get_supported_ptypes(
        port_id, RTE_PTYPE_L4_MASK, l4_ptypes, num_l4_dtx);
    for (auto i = 0; i < num_l4_dtx; ++i) {
        // L4 values overlap bit-wise (RTE_PTYPE_L4_FRAG is the TCP and
        // UDP bits combined), so compare the masked value for equality
        // instead of bit-ANDing.
        uint32_t l4 = l4_ptypes[i] & RTE_PTYPE_L4_MASK;
        if (l4 == RTE_PTYPE_L4_TCP)
            detect_tcp = true;
        if (l4 == RTE_PTYPE_L4_UDP)
            detect_udp = true;
        if (l4 == RTE_PTYPE_L4_ICMP)
            detect_icmp = true;
    }

    if (!detect_ether || !detect_arp || !detect_ipv4 || !detect_ipv6 ||
        !detect_tcp || !detect_udp || !detect_icmp) {
        RTE_LOG(ERR, USER1,
                "Supported Detection Modes:\n"
                "L2 Ether: %s\n"
                "L2 Arp  : %s\n"
                "L3 IPv4 : %s\n"
                "L3 IPv6 : %s\n"
                "L4 TCP  : %s\n"
                "L4 UDP  : %s\n"
                "L4 ICMP : %s\n",
                detect_ether ? "True" : "False",
                detect_arp ? "True" : "False",
                detect_ipv4 ? "True" : "False",
                detect_ipv6 ? "True" : "False",
                detect_tcp ? "True" : "False",
                detect_udp ? "True" : "False",
                detect_icmp ? "True" : "False");
        return false;
    }
    return true;
}
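For the "driver does not report supported ptypes" branch above, my plan is to parse in software. Here's a minimal sketch of what I have in mind, using rte_net_get_ptype() from rte_net.h; classify_sw_ptype is just a hypothetical helper name, not something in the existing code:

#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_net.h>

// Hypothetical software fallback: when the PMD does not classify packets,
// walk the headers ourselves. rte_net_get_ptype() returns an RTE_PTYPE_*
// value computed entirely in software, with no NIC support required.
static uint32_t classify_sw_ptype(const struct rte_mbuf *m) {
    struct rte_net_hdr_lens hdr_lens;
    uint32_t ptype = rte_net_get_ptype(m, &hdr_lens,
        RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK | RTE_PTYPE_L4_MASK);

    if (RTE_ETH_IS_IPV4_HDR(ptype)) {
        // IPv4 path; hdr_lens.l2_len is the offset of the IP header
    }
    if ((ptype & RTE_PTYPE_L4_MASK) == RTE_PTYPE_L4_TCP) {
        // TCP path; hdr_lens.l3_len gives the IP header length
    }
    return ptype;
}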
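And for the rte_eth_dev_reset() failure mentioned above: since the PMD apparently just doesn't implement a hardware reset, I'm assuming I can fall back to stopping and restarting the port. A rough sketch; restart_port is a hypothetical helper, and it assumes the existing queue configuration can stay in place (whereas a successful rte_eth_dev_reset() would require reconfiguring the port from scratch):

#include <errno.h>
#include <rte_ethdev.h>

// Hypothetical fallback for PMDs that return -ENOTSUP from reset:
// stop and restart the port instead. Unlike a successful reset,
// stop/start keeps the current port and queue configuration.
static int restart_port(uint16_t port_id) {
    int ret = rte_eth_dev_reset(port_id);
    if (ret != -ENOTSUP)
        return ret; // reset is supported, or failed for another reason

    ret = rte_eth_dev_stop(port_id);
    if (ret < 0)
        return ret;
    return rte_eth_dev_start(port_id);
}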
On Wed, Nov 13, 2024 at 9:10 PM Yasuhiro Ohara wrote:

> I would suggest re-installation of MELLANOX OFED, and/or upgrading the
> NIC firmware (this can be done using the OFED tools).
>
> Yes, the warning on only one port is odd. I suspect some kind of
> mismatch; for example, an older version of the NIC had only one port
> and the software assumed that.
>
> On Thu, Nov 14, 2024 at 5:43 CJ Sculti wrote:
>
>> I'm not using vfio; I just bound the interfaces to it once to test.
>> Shouldn't I be able to just use the default mlx5_core driver, without
>> binding to uio_pci_generic?
>>
>> On Wed, Nov 13, 2024 at 4:26 PM Thomas Monjalon wrote:
>>
>>> 13/11/2024 21:10, CJ Sculti:
>>> > I've been running my application for years on igb_uio with Intel
>>> > NICs. I recently replaced them with a Mellanox ConnectX-5 2x 40Gbps
>>> > NIC, updated the DPDK version my application uses, and compiled with
>>> > support for the mlx5 PMDs. Both 40Gbps ports are up with link, and
>>> > both are in Ethernet mode, not InfiniBand mode. However, I'm getting
>>> > complaints when I start my application about trying to load
>>> > 'mlx5_eth'? Both are bound to the mlx5_core driver at the moment.
>>> > When I bind them to vfio-pci or uio_pci_generic, my application
>>> > fails to recognize them at all as valid DPDK devices. Anyone have
>>> > any ideas? Also, strange that it only complains about one? I have
>>> > them configured in a bond on the kernel, as my application requires
>>> > that.
>>>
>>> You must not bind mlx5 devices with VFIO.
>>> I recommend reading the documentation.
>>> You can start here:
>>> https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#bifurcated-driver
>>> then
>>> https://doc.dpdk.org/guides/platform/mlx5.html#design