DPDK usage discussions
* [dpdk-users] Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3
@ 2020-07-13  6:39 Zhao, Ping
  2020-07-13 14:35 ` Zhao, Ping
  2020-07-13 17:22 ` David Christensen
  0 siblings, 2 replies; 3+ messages in thread
From: Zhao, Ping @ 2020-07-13  6:39 UTC (permalink / raw)
  To: users; +Cc: Zhao, Ping, Du, Alek

Dear DPDK Users,

I ran into a problem with DPDK 20.05 and a Mellanox CX-5 NIC. Does anyone know how to fix it? Thanks a lot!

Problem:
The Mellanox CX-5 card works with DPDK 19.11.3 but fails with DPDK 20.05:
testpmd in 20.05 reports no Ethernet devices.

Configuration:
Mellanox CX-5 OFED package: MLNX_OFED_LINUX-5.0-2.1.8.0-rhel8.1
DPDK 20.05
DPDK stable 19.11.3

Test:
        Tested with the DPDK testpmd app.

Logs:

Testpmd in DPDK 20.05
EAL: Detected 112 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: No legacy callbacks, legacy socket not created
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=163456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Done

Devices:
./usertools/dpdk-devbind.py --status

Network devices using kernel driver
===================================
0000:18:00.0 'MT28800 Family [ConnectX-5 Ex] 1019' if=ens785f0 drv=mlx5_core unused=vfio-pci
0000:18:00.1 'MT28800 Family [ConnectX-5 Ex] 1019' if=ens785f1 drv=mlx5_core unused=vfio-pci
0000:3d:00.1 'Ethernet Connection X722 for 10GBASE-T 37d2' if=eno2 drv=i40e unused=vfio-pci *Active*

# ibstat
CA 'mlx5_0'
        CA type: MT4121
        Number of ports: 1
        Firmware version: 16.27.2008
        Hardware version: 0
        Node GUID: 0x0c42a103003a1298
        System image GUID: 0x0c42a103003a1298
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 100
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x00010000
                Port GUID: 0x0e42a1fffe3a1298
                Link layer: Ethernet
CA 'mlx5_1'
        CA type: MT4121
        Number of ports: 1
        Firmware version: 16.27.2008
        Hardware version: 0
        Node GUID: 0x0c42a103003a1299
        System image GUID: 0x0c42a103003a1298
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 100
                Base lid: 0
                LMC: 0
                SM lid: 0
                Capability mask: 0x00010000
                Port GUID: 0x0e42a1fffe3a1299
                Link layer: Ethernet


Thanks,
Ping




* Re: [dpdk-users] Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3
  2020-07-13  6:39 [dpdk-users] Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3 Zhao, Ping
@ 2020-07-13 14:35 ` Zhao, Ping
  2020-07-13 17:22 ` David Christensen
  1 sibling, 0 replies; 3+ messages in thread
From: Zhao, Ping @ 2020-07-13 14:35 UTC (permalink / raw)
  To: users; +Cc: Du, Alek

The issue is fixed after enabling the MLX5_PMD flag. Sorry for the noise!
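For reference, a minimal sketch of what enabling that flag looks like for a make-based DPDK 20.05 build. The option name CONFIG_RTE_LIBRTE_MLX5_PMD is the one shipped in the stock config/common_base; the commands below edit a scratch copy so they are runnable anywhere, whereas in a real tree you would edit config/common_base itself.

```shell
conf=$(mktemp)   # scratch stand-in for config/common_base
printf 'CONFIG_RTE_LIBRTE_MLX5_PMD=n\n' > "$conf"

# Flip the PMD option from 'n' to 'y' (on a real tree: edit config/common_base)
sed -i 's/^CONFIG_RTE_LIBRTE_MLX5_PMD=n$/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' "$conf"

result=$(cat "$conf")
echo "$result"    # CONFIG_RTE_LIBRTE_MLX5_PMD=y
rm -f "$conf"

# Afterwards, rebuild the tree, e.g.:
#   make config T=x86_64-native-linux-gcc && make -j"$(nproc)"
```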

Regards,
Ping
From: Zhao, Ping
Sent: Monday, July 13, 2020 2:40 PM
To: 'users@dpdk.org' <users@dpdk.org>
Cc: Zhao, Ping <ping.zhao@intel.com>; Du, Alek <alek.du@intel.com>
Subject: Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3





* Re: [dpdk-users] Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3
  2020-07-13  6:39 [dpdk-users] Mellanox CX-5 failed with DPDK 20.05, but ok with DPDK 19.11.3 Zhao, Ping
  2020-07-13 14:35 ` Zhao, Ping
@ 2020-07-13 17:22 ` David Christensen
  1 sibling, 0 replies; 3+ messages in thread
From: David Christensen @ 2020-07-13 17:22 UTC (permalink / raw)
  To: Zhao, Ping, users; +Cc: Du, Alek

> Dear DPDK Users,
> 
> I ran into a problem with DPDK 20.05 and a Mellanox CX-5 NIC. Does anyone know how to fix it? Thanks a lot!
> 
> Problem:
> The Mellanox CX-5 card works with DPDK 19.11.3 but fails with DPDK 20.05:
> testpmd in 20.05 reports no Ethernet devices.

The Mellanox CX-5 poll mode driver (PMD) is not built by default in some 
configurations.  When building with meson, the mlx5 dependencies are 
usually detected automatically and the PMD is built without extra steps. 
If you build with GNU make, however, the MLX5 PMD must be explicitly 
enabled in the build configuration file.  Refer to the MLX5 PMD 
documentation for details:

https://doc.dpdk.org/guides/nics/mlx5.html
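For comparison, the meson flow described above might look like the following. This is a sketch of build commands (assuming DPDK 20.05 sources and MLNX_OFED installed), not something verified here, so exact paths may differ on your system.

```shell
# Meson/ninja build: the mlx5 dependencies (libibverbs/libmlx5 from
# MLNX_OFED) are probed automatically during configuration.
meson setup build
ninja -C build

# The configure log records whether the mlx5 probe succeeded; if the
# libraries were missing, the PMD is skipped rather than failing the build.
grep -i mlx5 build/meson-logs/meson-log.txt
```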

Dave


