Hi Thomas,

Actually, I remembered that in my previous setup I had run dpdk-devbind.py to
bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do this
and just wanted to confirm that this is correct.

Best,
Aaron

On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee wrote:

> Hi Thomas,
>
> I tried installing things from scratch two days ago and have gotten
> things working! I think part of the problem was figuring out the correct
> hugepage allocation for my system. If I recall correctly, I tried setting
> up my system with default page size 1G but perhaps didn't have enough
> pages allocated at the time. Currently I have the following, which gives
> me the output you've shown previously.
>
> root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> Node Pages Size   Total
> 0    16    1Gb    16Gb
> 1    16    1Gb    16Gb
>
> root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> TELEMETRY: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address       Name         Driver    Status  Link
> 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci  up      100 Gbps
>
> Best,
> Aaron
>
> On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon wrote:
>
>> 21/02/2022 19:52, Thomas Monjalon:
>> > 18/02/2022 22:12, Aaron Lee:
>> > > Hello,
>> > >
>> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
>> > > wondering if the card I have simply isn't compatible. I first noticed
>> > > that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of
>> > > the error logs when running dpdk-pdump.
>> >
>> > When testing a NIC, it is more convenient to use dpdk-testpmd.
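A note on the dpdk-pdump vs. dpdk-testpmd point above: dpdk-pdump runs as a
DPDK secondary process and expects a primary application to already be
running and sharing /var/run/dpdk/rte/mp_socket, so the "Fail to send request
.../mp_socket" lines in the log below are what you get when no primary is up;
dpdk-testpmd is itself a primary process and can run alone. A minimal sketch
of the intended pairing, assuming the in-tree build directory used elsewhere
in this thread and a testpmd build with pcap/pdump support:

  # terminal 1: start a primary process that owns the port
  build/app/dpdk-testpmd -- -i
  # terminal 2: attach dpdk-pdump as a secondary process and capture RX on port 0
  build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'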
>> >
>> > > EAL: Detected CPU lcores: 80
>> > > EAL: Detected NUMA nodes: 2
>> > > EAL: Detected static linkage of DPDK
>> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
>> > > vdev_scan(): Failed to request vdev from primary
>> > > EAL: Selected IOVA mode 'PA'
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
>> > > EAL: Cannot request default VFIO container fd
>> > > EAL: VFIO support could not be initialized
>> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
>> > > mlx5_common: port 0 request to primary process failed
>> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
>> > > mlx5_common: Failed to load driver mlx5_eth
>> > > EAL: Requested device 0000:af:00.0 cannot be used
>> > > EAL: Error - exiting with code: 1
>> > > Cause: No Ethernet ports - bye
>> >
>> > From this log, we are missing the steps taken before running the
>> > application.
>> >
>> > Please check these simple steps:
>> > - install rdma-core
>> > - build dpdk (meson build && ninja -C build)
>> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
>> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
>> >
>> > EAL: Detected CPU lcores: 10
>> > EAL: Detected NUMA nodes: 1
>> > EAL: Detected static linkage of DPDK
>> > EAL: Selected IOVA mode 'PA'
>> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
>> > Interactive-mode selected
>> > testpmd: create a new mbuf pool : n=219456, size=2176, socket=0
>> > testpmd: preferred mempool ops selected: ring_mp_mc
>> > Configuring Port 0 (socket 0)
>> > Port 0: 0C:42:A1:D6:E0:00
>> > Checking link statuses...
>> > Done
>> > testpmd> show port summary all
>> > Number of available ports: 1
>> > Port MAC Address       Name         Driver    Status  Link
>> > 0    0C:42:A1:D6:E0:00 08:00.0      mlx5_pci  up      25 Gbps
>> >
>> > > I noticed that the pci id of the card I was given is 15b3:1017 as below.
>> > > This sort of indicates to me that the PMD driver isn't supported on
>> > > this card.
>> >
>> > This card is well supported and even officially tested with DPDK 21.11,
>> > as you can see in the release notes:
>> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
>> >
>> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
>> > > [ConnectX-5] [15b3:1017]
>> > >
>> > > I'd appreciate it if someone has gotten this card to work with DPDK to
>> > > point me in the right direction or if my suspicions were correct that
>> > > this card doesn't work with the PMD.
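For reference, the vendor:device pair in that lspci line, 15b3:1017, is what
the PMD's hardware support table (shown next) is keyed on. One quick way to
list the Mellanox devices present together with their numeric IDs, assuming
lspci is available on the host:

  lspci -nn -d 15b3:

The IDs printed there can be matched against the dpdk-pmdinfo.py output below.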
>>
>> If you want to check which hardware is supported by a PMD,
>> you can use this command:
>>
>> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
>> PMD NAME: mlx5_eth
>> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
>> PMD HW SUPPORT:
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
>>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
>>
>> > Please tell me what drove you into the wrong direction,
>> > because I really would like to improve the documentation & tools.
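On the igb_uio question at the top of the thread: the mlx5 PMD follows a
bifurcated-driver model, so the port stays bound to its kernel driver
(mlx5_core, consistent with the "PMD KMOD DEPENDENCIES" line above) and DPDK
reaches it through rdma-core/libibverbs; binding the device to igb_uio or
vfio-pci with dpdk-devbind.py is not required. A minimal sketch for putting a
port back on the kernel driver after it was bound to igb_uio, assuming the
PCI address 0000:af:00.0 from the logs above:

  # check current bindings
  usertools/dpdk-devbind.py --status
  # release the port from igb_uio and hand it back to mlx5_core
  usertools/dpdk-devbind.py -u 0000:af:00.0
  usertools/dpdk-devbind.py -b mlx5_core 0000:af:00.0

After that, dpdk-devbind.py --status should list the port among the devices
using kernel drivers, and testpmd can probe it directly as in the run quoted
at the top of this thread.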