From: Gábor Lencse <lencse@hit.bme.hu>
To: users@dpdk.org
Date: Mon, 24 Feb 2025 21:19:04 +0100
Subject: dpdk-testpmd fails with a Mellanox ConnectX-4 Lx NIC

Hi Folks,

I am trying to run dpdk-testpmd on a Dell PowerEdge R730 server with a Mellanox ConnectX-4 Lx NIC. I can bind the vfio-pci driver:

root@dut3:~# dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:05:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci unused=mlx5_core,uio_pci_generic
0000:05:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' drv=vfio-pci unused=mlx5_core,uio_pci_generic
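
For reference, I bound the ports with the usual devbind invocation, something like this (quoted from memory, so the exact form may have differed slightly):

root@dut3:~# dpdk-devbind.py --bind=vfio-pci 0000:05:00.0 0000:05:00.1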

But EAL gives the following messages:

root@dut3:~# dpdk-testpmd
EAL: Detected 16 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.0 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.0, are kernel drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.

EAL: Requested device 0000:05:00.0 cannot be used
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:05:00.1 (socket 0)
mlx5_pci: no Verbs device matches PCI device 0000:05:00.1, are kernel drivers loaded?
common_mlx5: Failed to load driver = mlx5_pci.

EAL: Requested device 0000:05:00.1 cannot be used
EAL: Probe PCI driver: net_mlx4 (15b3:1007) device: 0000:82:00.0 (socket 1)
testpmd: create a new mbuf pool <mb_pool_0>: n=235456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=235456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
net_mlx4: 0x55e85e1b4b40: cannot attach flow rules (code 95, "Operation not supported"), flow error type 2, cause 0x220040d200, message: flow rule rejected by device
Fail to start port 0
Configuring Port 1 (socket 1)
net_mlx4: 0x55e85e1b8c00: cannot attach flow rules (code 95, "Operation not supported"), flow error type 2, cause 0x220040cd80, message: flow rule rejected by device
Fail to start port 1
Please stop the ports first
Done
No commandline core given, start packet forwarding
Not all ports were started
Press enter to exit

It seems that there is indeed no such driver as "mlx5_pci": the 'find / -name "*mlx5_pci*"' command gives no results.

Do I need to install something?
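
My guess is that the mlx5 PMD also needs the user-space Verbs libraries (rdma-core / libibverbs) besides the kernel modules, but I am not sure. If that is the case, a check like the following should show whether they are installed (the package name pattern is my assumption for Debian):

root@dut3:~# dpkg -l | grep -E 'libibverbs|rdma-core|mlx5'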

I tried a Google search for 'common_mlx5: Failed to load driver = mlx5_pci.', but the hits did not help. Following the first hit ( https://inbox.dpdk.org/users/CAE4=sSdsN7_CFMOS5ZF-3fEBLhN2af8+twJcO2t4xadNwTV68w@mail.gmail.com/T/ ), the recommended check gives the following result:

root@dut3:~# lsmod | grep mlx5_
mlx5_ib               385024  0
ib_uverbs             167936  3 mlx4_ib,rdma_ucm,mlx5_ib
ib_core               413696  11 rdma_cm,ib_ipoib,rpcrdma,mlx4_ib,iw_cm,ib_iser,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
mlx5_core            1183744  1 mlx5_ib
mlxfw                  32768  1 mlx5_core
ptp                    32768  4 igb,mlx4_en,mlx5_core,ixgbe
pci_hyperv_intf        16384  1 mlx5_core
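
If it helps, I can also list the Verbs devices that user space sees; assuming the ibv_devices tool from the ibverbs-utils package is available, the check would be the following, and I would expect it to show the mlx5 ports if the Verbs stack were working:

root@dut3:~# ibv_devices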

The software versions are as follows:

root@dut3:~# cat /etc/debian_version
11.9
root@dut3:~# uname -r
5.10.0-27-amd64
root@dut3:~# apt list dpdk
Listing... Done
dpdk/oldstable,now 20.11.10-1~deb11u1 amd64 [installed]

I also tried using uio_pci_generic instead of vfio-pci, and the result was the same.
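
For completeness, that rebind was again the usual devbind call, something like this (from memory):

root@dut3:~# dpdk-devbind.py --bind=uio_pci_generic 0000:05:00.0 0000:05:00.1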

However, everything works fine with an X540-AT2 NIC.

Please advise me how to resolve the issue!

As you may have noticed, there is another Mellanox NIC in the server, but with that one the situation is even worse. The two ports have the same PCI address, so I cannot bind a DPDK-compatible driver to its second port. Here is the dpdk-devbind output for the card:

0000:82:00.0 'MT27520 Family [ConnectX-3 Pro] 1007' if=enp130s0d1,enp130s0 drv=mlx4_core unused=vfio-pci,uio_pci_generic
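
As far as I understand, devbind can only target the whole PCI function, so something like the following would grab both ports at once, which is presumably why I cannot handle the second port separately:

root@dut3:~# dpdk-devbind.py --bind=vfio-pci 0000:82:00.0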

Your advice for resolving that issue would also be helpful to me. :-)

Thank you very much in advance!

Best regards,

Gábor Lencse
