DPDK usage discussions
From: Rocio Dominguez <rocio.dominguez@ericsson.com>
To: Asaf Penso <asafp@nvidia.com>,
	"NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>
Cc: "users@dpdk.org" <users@dpdk.org>, Matan Azrad <matan@nvidia.com>,
	Slava Ovsiienko <viacheslavo@nvidia.com>,
	Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
Date: Thu, 3 Feb 2022 10:30:20 +0000	[thread overview]
Message-ID: <AM5PR0701MB2324FE738BB80C2B13F20C2293289@AM5PR0701MB2324.eurprd07.prod.outlook.com> (raw)
In-Reply-To: <DM5PR1201MB2555C04807E5231338B5C1E3CD259@DM5PR1201MB2555.namprd12.prod.outlook.com>


Hi Asaf,

We have replaced the Mellanox NICs with Intel NICs to try to avoid this problem, but it is not working either; this time we get the following error:

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL:   using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci
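For reference, this is roughly how the VFs above can be created and bound (a sketch under my assumptions: the PF name p8p1 and the VF PCI addresses are taken from the listings in this thread; by default the script only prints the commands instead of running them):

```shell
#!/bin/sh
# Create i40e VFs on the PF and bind them to vfio-pci with dpdk-devbind.
# DRY_RUN=1 (the default) only echoes each command; unset it to execute.
DRY_RUN=${DRY_RUN:-1}
PF=${PF:-p8p1}
NUM_VFS=${NUM_VFS:-4}

run() { echo "+ $*"; [ -n "$DRY_RUN" ] || "$@"; }

# 1. Create the VFs on the physical function
run sh -c "echo $NUM_VFS > /sys/class/net/$PF/device/sriov_numvfs"

# 2. Load vfio-pci and bind each VF to it
run modprobe vfio-pci
for bdf in 0000:d8:02.0 0000:d8:02.1 0000:d8:02.2 0000:d8:02.3; do
    run dpdk-devbind --bind=vfio-pci "$bdf"
done
```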

The interfaces are up:

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

testpmd is working:

pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>

Any idea on what could be causing the error this time?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

We'll need to check, but how do you want to proceed?
You either need 19.11 LTS or 20.11 LTS to work properly.

Regards,
Asaf Penso
________________________________
From: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Cc: users@dpdk.org<mailto:users@dpdk.org> <users@dpdk.org<mailto:users@dpdk.org>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices


Hi Asaf,



Yes, it seems that the DPDK 20.08 code is built into the VNF I'm deploying, so it is always using this version, which apparently doesn't include the patch that fixes this error.



I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/



and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }
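One quick way to check whether a given DPDK source tree already carries this change is to look for the mlx5_class_get() probe guard the patch introduces; has_fix below is a hypothetical helper of mine, not a DPDK tool, and only a heuristic:

```shell
#!/bin/sh
# Report whether a DPDK net/mlx5 source directory contains the
# mlx5_class_get() probe guard added by the patch above.
has_fix() {
    dir="${1:-drivers/net/mlx5}"
    if grep -rqs "mlx5_class_get" "$dir"; then
        echo "present"
    else
        echo "absent"
    fi
}

has_fix "$1"
```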

Could you please confirm?



Thanks,



Rocío



From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Cc: users@dpdk.org<mailto:users@dpdk.org>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices



I see two differences below.

First, in testpmd the version is 19.11.11, and in your application, it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}



Second, in your application, I see the VFIO driver is not started properly:

20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
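In case it is useful, "cannot open VFIO container" usually means /dev/vfio is not usable from where the application runs (e.g. not exposed to the pod). A small check along these lines may help; it is only a sketch, and the IOMMU group number is an example, not a real value from your host:

```shell
#!/bin/sh
# Check the device nodes EAL needs before it can open a VFIO container.
# DEV_DIR is a parameter so the check can be pointed at another root.
DEV_DIR="${1:-/dev}"

check() { # check <path> <hint-if-missing>
    if [ -e "$1" ]; then
        echo "OK      $1"
    else
        echo "MISSING $1 ($2)"
    fi
}

check "$DEV_DIR/vfio/vfio" "load vfio-pci and expose /dev/vfio to the pod"
# Each device bound to vfio-pci also needs its IOMMU group node;
# find the number with: readlink /sys/bus/pci/devices/<bdf>/iommu_group
check "$DEV_DIR/vfio/42" "IOMMU group node (42 is only an example number)"
```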



Regards,

Asaf Penso



From: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Cc: users@dpdk.org<mailto:users@dpdk.org>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices



Hi Asaf,



I have manually compiled and installed DPDK 19.11.11.



Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:



pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>



But when I run my Data Plane app, the result is:



{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}

{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}



Any idea on what could be the problem?



Thanks,



Rocío





From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Cc: users@dpdk.org<mailto:users@dpdk.org>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices



Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.



Let us know the results, and we'll continue from there.



Regards,

Asaf Penso

________________________________

From: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>
Cc: users@dpdk.org<mailto:users@dpdk.org> <users@dpdk.org<mailto:users@dpdk.org>>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices



Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but the corresponding RPM package for SLES 15 SP2 is not available in the openSUSE repositories; the latest one there is DPDK 19.11.10.

I have installed it but the problem persists. It's probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an LTS release; I'm not sure whether installing it on SLES 15 SP2 could be a problem. I will try it anyway.

I will also try to find another way to install 19.11.11 on SLES 15 SP2 apart from using RPM or zypper; any suggestion is appreciated.
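One route that avoids the distro packages entirely is building the stable release from upstream source. The sketch below prints the steps as a checklist rather than executing them; the tarball URL and the meson/ninja flow are my assumptions for 19.11.11, and the install prefix may need adjusting for SLES:

```shell
#!/bin/sh
# Print the steps to build and install DPDK 19.11.11 from the upstream
# stable tarball. Emitted as a checklist, not executed, so it can be
# reviewed and adapted before running on the target host.
VER=${VER:-19.11.11}

steps() {
cat <<EOF
wget https://fast.dpdk.org/rel/dpdk-$VER.tar.xz
tar xf dpdk-$VER.tar.xz
cd dpdk-stable-$VER
meson build
ninja -C build
ninja -C build install
ldconfig
EOF
}

steps
```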

Thanks,

Rocío

-----Original Message-----
From: Asaf Penso <asafp@nvidia.com<mailto:asafp@nvidia.com>>
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net<mailto:thomas@monjalon.net>>; Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>
Cc: users@dpdk.org<mailto:users@dpdk.org>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko <viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try the latest 19.11 LTS and tell us whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon <thomas@monjalon.net<mailto:thomas@monjalon.net>>
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez <rocio.dominguez@ericsson.com<mailto:rocio.dominguez@ericsson.com>>
>Cc: users@dpdk.org<mailto:users@dpdk.org>; Matan Azrad <matan@nvidia.com<mailto:matan@nvidia.com>>; Slava Ovsiienko
><viacheslavo@nvidia.com<mailto:viacheslavo@nvidia.com>>; Raslan Darawsheh <rasland@nvidia.com<mailto:rasland@nvidia.com>>
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this
>> MLNX_OFED version) kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>>             {
>>                 "resourceName": "mlnx_sriov_netdevice",
>>                 "resourcePrefix": "mellanox.com",
>>                 "isRdma": true,
>>                 "selectors": {
>>                     "vendors": ["15b3"],
>>                     "devices": ["1014"],
>>                     "drivers": ["mlx5_core"],
>>                     "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3",
>> "0000:d8:00.4",
>"0000:d8:00.5"],
>>                     "isRdma": true
>>                 }
>>
>> The sriov device plugin starts without problems, the devices are
>> correctly
>allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are binded to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe
>> unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe
>> unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe
>> unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe
>> unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0
>> drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1
>> drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=
>> drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed'
>> if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=
>> drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed'
>> if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0
>> drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1
>> drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4
>QSFP28
>> fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4
>QSFP28
>> fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4
>QSFP28
>> fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4
>QSFP28
>> fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
>> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
>> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
>> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
>> enp216s0f5 (Up) pcgwpod009-c04:~ #
>>
>>
>> But when I run my application the Mellanox adapters are probed and I
>obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci
>> (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
>> i
>> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
>> "6"},"message":"[pio] net_mlx5: unable to recognize
>> master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
>> i
>> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
>> "6"},"message":"[pio] common_mlx5: Failed to load driver =
>> net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
>> i
>> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
>> "6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be
>> used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
>> i
>> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
>> "6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever
>> i
>> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
>> "6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports,
>> actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever
>> i
>> ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id"
>> :"6"},"message":"[pktio_libpio_init] No network ports could be
>> enabled!"}
>>
>> Could you please help me with this issue?
>>
>>
>> Thanks,
>>
>> Rocío
>>
>
>
>
>



Thread overview: 15+ messages
2022-01-14 10:10 Rocio Dominguez
2022-01-16 13:23 ` Thomas Monjalon
2022-01-16 15:30   ` Asaf Penso
2022-01-17 20:20     ` Rocio Dominguez
2022-01-20  7:16       ` Asaf Penso
2022-01-20 19:48         ` Rocio Dominguez
2022-01-31 11:49           ` Asaf Penso
2022-01-31 12:01             ` Rocio Dominguez
2022-01-31 17:02               ` Asaf Penso
2022-02-03 10:30                 ` Rocio Dominguez [this message]
2022-02-03 10:49                   ` Asaf Penso
2022-02-04 12:54                     ` Rocio Dominguez
2022-02-04 14:09                       ` Asaf Penso
2022-02-04 14:55                         ` Muhammad Zain-ul-Abideen
2022-02-07 17:14                           ` Rocio Dominguez
