* dpdk-testpmd fails with a Mellanox ConnectX-4 Lx NIC
From: Gábor LENCSE @ 2025-02-24 20:19 UTC
To: users
Hi Folks,
I am trying to run dpdk-testpmd on a Dell PowerEdge R730 server with
Mellanox ConnectX-4 Lx NIC card. I can bind the vfio-pci driver:
root@dut3:~# dpdk-devbind.py --status
Network devices using DPDK-compatible driver
============================================
0000:05:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
unused=mlx5_core,uio_pci_generic
0000:05:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
unused=mlx5_core,uio_pci_generic
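(The binding itself was just the usual dpdk-devbind step, something like the
following; the exact invocation is only approximate:

    modprobe vfio-pci
    dpdk-devbind.py --bind=vfio-pci 0000:05:00.0 0000:05:00.1
)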
But EAL gives the following messages:
root@dut3:~# dpdk-testpmd
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.0 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.0, are kernel
drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:05:00.0 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.1 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.1, are kernel
drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.
EAL: Requested device 0000:05:00.1 cannot be used
EAL: Probe PCI driver: net_mlx4 (15b3:1007) device: 0000:82:00.0 (socket 1)
testpmd: create a new mbuf pool <mb_pool_0>: n=235456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=235456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
net_mlx4: 0x55e85e1b4b40: cannot attach flow rules (code 95, "Operation
not supported"), flow error type 2, cause 0x220040d200, message: flow
rule rejected by device
Fail to start port 0
Configuring Port 1 (socket 1)
net_mlx4: 0x55e85e1b8c00: cannot attach flow rules (code 95, "Operation
not supported"), flow error type 2, cause 0x220040cd80, message: flow
rule rejected by device
Fail to start port 1
Please stop the ports first
Done
No commandline core given, start packet forwarding
Not all ports were started
Press enter to exit
It seems that there is indeed no such driver as "mlx5_pci": the
'find / -name "*mlx5_pci*"' command gives no results.
Do I need to install something?
I tried a Google search for 'common_mlx5: Failed to load driver
= mlx5_pci.', but the hits did not help. As for the first hit (
https://inbox.dpdk.org/users/CAE4=sSdsN7_CFMOS5ZF-3fEBLhN2af8+twJcO2t4xadNwTV68w@mail.gmail.com/T/
), the result of the check it recommends is the following:
root@dut3:~# lsmod | grep mlx5_
mlx5_ib 385024 0
ib_uverbs 167936 3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core 413696 11
rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core 1183744 1 mlx5_ib
mlxfw 32768 1 mlx5_core
ptp 32768 4 igb,mlx4_en,mlx5_core,ixgbe
pci_hyperv_intf 16384 1 mlx5_core
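(Side note: assuming the ibverbs-utils package is installed, a quick way to
check which Verbs devices libibverbs can actually see is:

    ibv_devices

which lists the RDMA device names and node GUIDs visible to user space.)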
The software versions are the following:
root@dut3:~# cat /etc/debian_version
11.9
root@dut3:~# uname -r
5.10.0-27-amd64
root@dut3:~# apt list dpdk
Listing... Done
dpdk/oldstable,now 20.11.10-1~deb11u1 amd64 [installed]
I also tried using uio_pci_generic instead of vfio-pci, and the
result was the same.
However, everything works fine with an X540-AT2 NIC.
Please advise me on how to resolve this issue!
As you may have noticed, there is another Mellanox NIC in the server,
but with that one the situation is even worse. The two ports have the
same PCI address, and thus I cannot bind a DPDK-compatible driver to its
second port. Here is the dpdk-devbind output for that card:
0000:82:00.0 'MT27520 Family [ConnectX-3 Pro] 1007'
if=enp130s0d1,enp130s0 drv=mlx4_core unused=vfio-pci,uio_pci_generic
Your advice for resolving that issue would also be helpful to me. :-)
Thank you very much in advance!
Best regards,
Gábor Lencse
* Re: dpdk-testpmd fails with a Mellanox ConnectX-4 Lx NIC
From: Dmitry Kozlyuk @ 2025-02-24 20:37 UTC
To: Gábor LENCSE; +Cc: users
Hi Gabor,
2025-02-24 21:19 (UTC+0100), Gábor LENCSE:
> I am trying to run dpdk-testpmd on a Dell PowerEdge R730 server with
> Mellanox ConnectX-4 Lx NIC card. I can bind the vfio-pci driver:
>
> root@dut3:~# dpdk-devbind.py --status
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:05:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
> unused=mlx5_core,uio_pci_generic
> 0000:05:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci
> unused=mlx5_core,uio_pci_generic
Mellanox NICs need mlx5_core kernel driver even when used via DPDK.
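The mlx5 PMD is a bifurcated driver: it reaches the NIC through mlx5_core and
libibverbs/rdma-core rather than through vfio-pci or UIO, so the ports have to
stay bound to mlx5_core. A minimal sketch, using the device addresses from
your output:

    dpdk-devbind.py --bind=mlx5_core 0000:05:00.0 0000:05:00.1
    dpdk-devbind.py --status    # both ports should now show drv=mlx5_core

(The libibverbs user-space libraries must also be present; the ib_uverbs
module in your lsmod output suggests the rdma-core packages are already
installed.)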
* Re: dpdk-testpmd fails with a Mellanox ConnectX-4 Lx NIC
From: Gábor LENCSE @ 2025-02-24 21:32 UTC
To: users
Hi Dmitry,
On 2025-02-24 21:37, Dmitry Kozlyuk wrote:
> Mellanox NICs need mlx5_core kernel driver even when used via DPDK.
Indeed, it works! Thank you very much! I would have never thought of that.
However, it seems to be rather slow and it loses frames.
With my siitperf ( https://github.com/lencsegabor/siitperf ) measurement
program I can achieve 7.1 Mfps per direction using bidirectional traffic
with a 64-byte frame size when I use the X540. The bottleneck is surely
the X540, as the numbers are the same whether I use fixed IP addresses and
port numbers or vary either or both of them. With the X710, I can achieve
more than 8 Mfps using RFC 4814 pseudorandom port numbers and more than
10 Mfps using fixed frames (again with bidirectional traffic and 64-byte
frames).
However, with the ConnectX-4 Lx, the first step of the binary search of
the throughput test, using an 8 Mfps rate, lasted more than 120 s
(instead of 60 s).
What is worse, the binary search counts down to 0 due to the loss of a
small number of packets.
Here is the current output:
---------------------------------------------------
2025-02-24 22:26:18.165766809 Iteration no. 1-8
---------------------------------------------------
Testing rate: 31250 fps.
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.0 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.1 (socket 0)
EAL: Probe PCI driver: net_mlx4 (15b3:1007) device: 0000:82:00.0 (socket 1)
EAL: No legacy callbacks, legacy socket not created
Info: Left port and Left Sender CPU core belong to the same NUMA node: 0
Info: Right port and Right Receiver CPU core belong to the same NUMA node: 0
Info: Right port and Right Sender CPU core belong to the same NUMA node: 0
Info: Left port and Left Receiver CPU core belong to the same NUMA node: 0
Info: Testing initiated at 2025-02-24 22:26:19
Info: Forward sender's sending took 59.9999682437 seconds.
Forward frames sent: 1875000
Info: Reverse sender's sending took 59.9999682300 seconds.
Reverse frames sent: 1875000
Reverse frames received: 1874963
Forward frames received: 1874958
Info: Test finished.
Forward: 1874958 frames were received from the required 1875000 frames
Reverse: 1874963 frames were received from the required 1875000 frames
TEST FAILED
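To put the loss in perspective: 1875000 - 1874958 = 42 frames were lost in
the forward direction and 1875000 - 1874963 = 37 in the reverse direction
over the 60 s test, that is, a loss ratio of roughly 2*10^-5. Since the
throughput search treats any loss as a failure (hence the TEST FAILED line),
even this tiny loss drives the rate toward 0.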
Do you have any idea what is happening here?
Best regards,
Gábor