DPDK usage discussions
From: Muhammad Zain-ul-Abideen <zain2294@gmail.com>
To: Asaf Penso <asafp@nvidia.com>
Cc: Rocio Dominguez <rocio.dominguez@ericsson.com>,
	 "NBU-Contact-Thomas Monjalon (EXTERNAL)" <thomas@monjalon.net>,
	Ferruh Yigit <ferruh.yigit@intel.com>,
	 Qi Zhang <qi.z.zhang@intel.com>, users <users@dpdk.org>,
	Matan Azrad <matan@nvidia.com>,
	 Slava Ovsiienko <viacheslavo@nvidia.com>,
	Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
Date: Fri, 4 Feb 2022 19:55:12 +0500	[thread overview]
Message-ID: <CAN7yQ2qrZ_nfO7YikaCab9RnjbN0rVDZNWD86qOYEMZvvkF+0Q@mail.gmail.com> (raw)
In-Reply-To: <DM5PR1201MB2555AD3B37F4F51DE412A512CD299@DM5PR1201MB2555.namprd12.prod.outlook.com>

[-- Attachment #1: Type: text/plain, Size: 34282 bytes --]

Was the mlx card on cpu2?

On Fri, Feb 4, 2022, 7:09 PM Asaf Penso <asafp@nvidia.com> wrote:

> Great, thanks for the update.
> I think the community would benefit if you keep trying with the Mellanox NIC
> so the issue can be found and resolved.
>
> Regards,
> Asaf Penso
> ------------------------------
> *From:* Rocio Dominguez <rocio.dominguez@ericsson.com>
> *Sent:* Friday, February 4, 2022 2:54:20 PM
> *To:* Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>;
> Qi Zhang <qi.z.zhang@intel.com>
> *Cc:* users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <
> rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
> Hi Asaf,
>
>
>
> Finally I solved the problem with the Intel NICs. I am using dual NUMA, and I
> realized that my application was using CPUs from NUMA node 0 while I was
> assigning a NIC from NUMA node 1. Using a NIC from NUMA node 0 solved the problem.
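>
> For reference, a quick way to check this alignment (the PCI address below is
> an example from this setup, adjust it to the device in use):
>
>   cat /sys/bus/pci/devices/0000:d8:02.1/numa_node    # NUMA node the NIC sits on
>   lscpu | grep "NUMA node"                           # which CPUs belong to each node
>
> If the NIC reports node 1 while the EAL --lcores/--master-lcore arguments only
> select cores from node 0, the application and the NIC are on different sockets,
> which is the mismatch described above.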
>
>
>
> I don’t know if the problem with Mellanox NICs could be solved in the same
> way. But for the moment, we will use Intel NICs.
>
>
>
> Thanks,
>
>
>
> Rocío
>
>
>
> *From:* Asaf Penso <asafp@nvidia.com>
> *Sent:* Thursday, February 3, 2022 11:50 AM
> *To:* Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas
> Monjalon (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <
> ferruh.yigit@intel.com>; Qi Zhang <qi.z.zhang@intel.com>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Hello Rocio,
>
>
>
> For Intel’s NICs it would be better to take it up with @Ferruh Yigit
> <ferruh.yigit@intel.com>/@Qi Zhang <qi.z.zhang@intel.com>.
>
> For Nvidia’s, let’s continue together.
>
>
>
> Regards,
>
> Asaf Penso
>
>
>
> *From:* Rocio Dominguez <rocio.dominguez@ericsson.com>
> *Sent:* Thursday, February 3, 2022 12:30 PM
> *To:* Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Hi Asaf,
>
>
>
> We have replaced the Mellanox NICs with Intel NICs to try to avoid this
> problem, but it is not working either, this time with the following error:
>
>
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config]
> pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init]
> CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> USER1: DPDK version: DPDK 20.08.0"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix
> pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1
> --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Detected 96 lcore(s)"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Detected 2 NUMA nodes"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Selected IOVA mode 'VA'"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Probing VFIO support..."}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: VFIO support initialized"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL:   using IOMMU type 1 (Type 1)"}
>
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket
> 1)"}*
>
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Releasing pci mapped resource for 0000:d8:02.1"}*
>
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}*
>
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}*
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Requested device 0000:d8:02.1 cannot be used"}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> EAL: Bus (pci) probe failed."}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio]
> USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init]
> No network ports could be enabled!"}
>
>
>
> Since we are now using Intel NICs, I have created the VFs and bound them to
> the vfio-pci driver.
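>
> The steps were roughly the following (a sketch; the PF name, VF count and PCI
> addresses are the ones from this setup, adjust as needed):
>
>   echo 4 > /sys/class/net/p8p1/device/sriov_numvfs    # create 4 VFs on the PF
>   modprobe vfio-pci                                    # load the vfio-pci module
>   dpdk-devbind --bind=vfio-pci 0000:d8:02.0 0000:d8:02.1 0000:d8:02.2 0000:d8:02.3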
>
>
>
> pcgwpod009-c04:~ # dpdk-devbind --status
>
>
>
> N Network devices using DPDK-compatible driver
>
> ============================================
>
> 0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci
> unused=iavf
>
> 0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci
> unused=iavf
>
> 0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci
> unused=iavf
>
> 0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci
> unused=iavf
>
>
>
> Network devices using kernel driver
>
> ===================================
>
> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe
> unused=vfio-pci
>
> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe
> unused=vfio-pci
>
> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe
> unused=vfio-pci
>
> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe
> unused=vfio-pci
>
> 0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1
> drv=i40e unused=vfio-pci
>
> 0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2
> drv=i40e unused=vfio-pci
>
> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1
> drv=ixgbe unused=vfio-pci
>
> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2
> drv=ixgbe unused=vfio-pci
>
> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0
> drv=ixgbevf unused=vfio-pci
>
> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1
> drv=ixgbevf unused=vfio-pci
>
> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=
> drv=ixgbevf unused=vfio-pci
>
> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=
> drv=ixgbevf unused=vfio-pci
>
> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1
> drv=ixgbe unused=vfio-pci
>
> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2
> drv=ixgbe unused=vfio-pci
>
> 0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1
> drv=i40e unused=vfio-pci
>
> 0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2
> drv=i40e unused=vfio-pci
>
>
>
> The interfaces are up:
>
>
>
> pcgwpod009-c04:~ # ip link show dev p8p1
>
> 290: p8p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
> mode DEFAULT group default qlen 1000
>
>     link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
>
>     vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof
> checking on, link-state auto, trust off
>
>     vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof
> checking on, link-state auto, trust off
>
>     vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof
> checking on, link-state auto, trust off
>
>     vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof
> checking on, link-state auto, trust off
>
> pcgwpod009-c04:~ #
>
>
>
> testpmd is working:
>
>
>
> pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2
> -w d8:02.3 -- --rxq=2 --txq=2 -i
>
> EAL: Detected 96 lcore(s)
>
> EAL: Detected 2 NUMA nodes
>
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>
> EAL: Selected IOVA mode 'VA'
>
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
>
> EAL: Probing VFIO support...
>
> EAL: VFIO support initialized
>
> EAL: PCI device 0000:d8:02.0 on NUMA socket 1
>
> EAL:   probe driver: 8086:154c net_i40e_vf
>
> EAL:   using IOMMU type 1 (Type 1)
>
> EAL: PCI device 0000:d8:02.1 on NUMA socket 1
>
> EAL:   probe driver: 8086:154c net_i40e_vf
>
> EAL: PCI device 0000:d8:02.2 on NUMA socket 1
>
> EAL:   probe driver: 8086:154c net_i40e_vf
>
> EAL: PCI device 0000:d8:02.3 on NUMA socket 1
>
> EAL:   probe driver: 8086:154c net_i40e_vf
>
> Interactive-mode selected
>
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176,
> socket=0
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176,
> socket=1
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Configuring Port 0 (socket 1)
>
> Port 0: FE:72:DB:BE:05:EF
>
> Configuring Port 1 (socket 1)
>
> Port 1: 5E:C5:3E:86:1A:84
>
> Configuring Port 2 (socket 1)
>
> Port 2: 42:F0:5D:B0:1F:B3
>
> Configuring Port 3 (socket 1)
>
> Port 3: 46:00:42:2F:A2:DE
>
> Checking link statuses...
>
> Done
>
> testpmd>
>
>
>
> Any idea what could be causing the error this time?
>
>
>
> Thanks,
>
>
>
> Rocío
>
>
>
> *From:* Asaf Penso <asafp@nvidia.com>
> *Sent:* Monday, January 31, 2022 6:02 PM
> *To:* Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas
> Monjalon (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* Re: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> We'll need to check, but how do you want to proceed?
>
> You either need 19.11 LTS or 20.11 LTS to work properly.
>
>
>
> Regards,
>
> Asaf Penso
> ------------------------------
>
> *From:* Rocio Dominguez <rocio.dominguez@ericsson.com>
> *Sent:* Monday, January 31, 2022 2:01:43 PM
> *To:* Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <
> rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Hi Asaf,
>
>
>
> Yes, it seems that DPDK version 20.08 is built into the VNF I’m deploying, so
> it is always using this version, which apparently doesn’t have the patch that
> fixes this error.
>
>
>
> I think the patch is the following:
>
>
> https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/
>
>
>
> and the code part that solves the error is:
>
> +       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
> +                DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
> +                        " driver.");
> +                return 1;
> +       }
>
> Could you please confirm?
>
>
>
> Thanks,
>
>
>
> Rocío
>
>
>
> *From:* Asaf Penso <asafp@nvidia.com>
> *Sent:* Monday, January 31, 2022 12:49 PM
> *To:* Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas
> Monjalon (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> I see two differences below.
>
> First, in testpmd the version is 19.11.11, and in your application, it’s
> 20.08. See this print:
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> USER1: DPDK version: DPDK 20.08.0"}
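>
> One way to double-check which DPDK an application binary embeds (just a
> sketch; the binary path is a placeholder) is to grep the version string
> compiled into it:
>
>   strings /path/to/your-dataplane-binary | grep -o "DPDK [0-9.]*"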
>
>
>
> Second, in your application, I see the VFIO driver is not started properly:
>
> 20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
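>
> A quick sanity check (just a sketch; run it where the application actually
> executes, e.g. inside the pod):
>
>   lsmod | grep vfio     # vfio and vfio-pci modules loaded on the host
>   ls -l /dev/vfio       # /dev/vfio/vfio plus the IOMMU group devices
>
> Error 2 (No such file or directory) usually means /dev/vfio/vfio is not
> visible to the process, for example because it was not exposed to the
> container.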
>
>
>
> Regards,
>
> Asaf Penso
>
>
>
> *From:* Rocio Dominguez <rocio.dominguez@ericsson.com>
> *Sent:* Thursday, January 20, 2022 9:49 PM
> *To:* Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Hi Asaf,
>
>
>
> I have manually compiled and installed DPDK 19.11.11.
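>
> Roughly with these steps (a sketch assuming the legacy make-based build, which
> matches the x86_64-native-linux-gcc directory used below; the mlx5 PMD is
> disabled by default in that build):
>
>   wget https://fast.dpdk.org/rel/dpdk-19.11.11.tar.xz
>   tar xf dpdk-19.11.11.tar.xz && cd dpdk-stable-19.11.11
>   # set CONFIG_RTE_LIBRTE_MLX5_PMD=y in config/common_base
>   make install T=x86_64-native-linux-gcc -j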
>
>
>
> Executing testpmd on the Mellanox NIC VFs where I want to run my app gives
> this result:
>
>
>
> pcgwpod009-c04:~/dpdk-stable-19.11.11 #
> ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w
> d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
>
> EAL: Detected 96 lcore(s)
>
> EAL: Detected 2 NUMA nodes
>
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>
> EAL: Selected IOVA mode 'VA'
>
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size
>
> EAL: Probing VFIO support...
>
> EAL: VFIO support initialized
>
> EAL: PCI device 0000:d8:00.2 on NUMA socket 1
>
> EAL:   probe driver: 15b3:1014 net_mlx5
>
> EAL: PCI device 0000:d8:00.3 on NUMA socket 1
>
> EAL:   probe driver: 15b3:1014 net_mlx5
>
> EAL: PCI device 0000:d8:00.4 on NUMA socket 1
>
> EAL:   probe driver: 15b3:1014 net_mlx5
>
> EAL: PCI device 0000:d8:00.5 on NUMA socket 1
>
> EAL:   probe driver: 15b3:1014 net_mlx5
>
> Interactive-mode selected
>
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176,
> socket=0
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176,
> socket=1
>
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Configuring Port 0 (socket 1)
>
> Port 0: 36:FE:F0:D2:90:27
>
> Configuring Port 1 (socket 1)
>
> Port 1: 72:AC:33:BF:0A:FA
>
> Configuring Port 2 (socket 1)
>
> Port 2: 1E:8D:81:60:43:E0
>
> Configuring Port 3 (socket 1)
>
> Port 3: C2:3C:EA:94:06:B4
>
> Checking link statuses...
>
> Done
>
> testpmd>
>
>
>
> But when I run my Data Plane app, the result is
>
>
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup]
> block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384,
> mem_size: 67108864"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init]
> CTRL: pci devices added: 1, vhost user devices added: 0"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config]
> pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5
> found"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init]
> CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> USER1: DPDK version: DPDK 20.08.0"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix
> pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5
> --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Detected 96 lcore(s)"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Detected 2 NUMA nodes"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Selected IOVA mode 'VA'"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs
> found for that size"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Probing VFIO support..."}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: VFIO support could not be initialized"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket
> 1)"}
>
> *{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> net_mlx5: unable to recognize master/representors on the multiple IB
> devices"}*
>
> *{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> common_mlx5: Failed to load driver = net_mlx5."}*
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Requested device 0000:d8:00.5 cannot be used"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> EAL: Bus (pci) probe failed."}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio]
> USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init]
> No network ports could be enabled!"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu]
> libpio packet module is NOT initialized"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu]
> pktsock packet module is NOT initialized"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu]
> linux packet module is initialized"}
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu]
> tap packet module is NOT initialized"}
>
>
>
> Any idea what could be the problem?
>
>
>
> Thanks,
>
>
>
> Rocío
>
>
>
>
>
> *From:* Asaf Penso <asafp@nvidia.com>
> *Sent:* Thursday, January 20, 2022 8:17 AM
> *To:* Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas
> Monjalon (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* Re: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Although the inbox drivers come with a pre-installed DPDK, you can manually
> download, compile, install, and work with whatever version you wish.
>
>
>
> Let us know the results, and we'll continue from there.
>
>
>
> Regards,
>
> Asaf Penso
> ------------------------------
>
> *From:* Rocio Dominguez <rocio.dominguez@ericsson.com>
> *Sent:* Monday, January 17, 2022 10:20:58 PM
> *To:* Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon
> (EXTERNAL) <thomas@monjalon.net>
> *Cc:* users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <
> rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
>
>
> Hi Asaf,
>
> Thanks for the prompt answer.
>
> I have checked that the latest 19.11 LTS is 19.11.11, but the corresponding
> RPM package for SLES 15 SP2 is not available in the OpenSUSE repositories;
> the latest one there is DPDK 19.11.10.
>
> I have installed it but the problem persists. It's probably solved in
> 19.11.11.
>
> There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an LTS
> release; I'm not sure whether installing it on SLES 15 SP2 could be a problem.
> I will try it anyway.
>
> I will also try to find another way to install 19.11.11 on SLES 15 SP2 apart
> from using RPM or zypper; any suggestion is appreciated.
>
> Thanks,
>
> Rocío
>
> -----Original Message-----
> From: Asaf Penso <asafp@nvidia.com>
> Sent: Sunday, January 16, 2022 4:31 PM
> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Rocio
> Dominguez <rocio.dominguez@ericsson.com>
> Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <
> viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> Subject: RE: net_mlx5: unable to recognize master/representors on the
> multiple IB devices
>
> Hello Rocio,
> IIRC, there was a fix in a recent stable version.
> Would you please try the latest 19.11 LTS and tell us whether you still
> see the issue?
>
> Regards,
> Asaf Penso
>
> >-----Original Message-----
> >From: Thomas Monjalon <thomas@monjalon.net>
> >Sent: Sunday, January 16, 2022 3:24 PM
> >To: Rocio Dominguez <rocio.dominguez@ericsson.com>
> >Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> ><viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
> >Subject: Re: net_mlx5: unable to recognize master/representors on the
> >multiple IB devices
> >
> >+Cc mlx5 experts
> >
> >
> >14/01/2022 11:10, Rocio Dominguez:
> >> Hi,
> >>
> >> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
> >>
> >> I'm using:
> >>
> >> OS SLES 15 SP2
> >> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
> >> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
> >> Mellanox adapters firmware 12.28.2006 (corresponding to this
> >> MLNX_OFED version) kernel 5.3.18-24.34-default
> >>
> >>
> >> This is my SRIOV configuration for DPDK capable PCI slots:
> >>
> >>             {
> >>                 "resourceName": "mlnx_sriov_netdevice",
> >>                 "resourcePrefix": "mellanox.com",
> >>                 "isRdma": true,
> >>                 "selectors": {
> >>                     "vendors": ["15b3"],
> >>                     "devices": ["1014"],
> >>                     "drivers": ["mlx5_core"],
> >>                     "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3",
> >> "0000:d8:00.4",
> >"0000:d8:00.5"],
> >>                     "isRdma": true
> >>                 }
> >>
> >> The sriov device plugin starts without problems, the devices are
> >> correctly
> >allocated:
> >>
> >> {
> >>   "cpu": "92",
> >>   "ephemeral-storage": "419533922385",
> >>   "hugepages-1Gi": "8Gi",
> >>   "hugepages-2Mi": "4Gi",
> >>   "intel.com/intel_sriov_dpdk": "0",
> >>   "intel.com/sriov_cre": "3",
> >>   "mellanox.com/mlnx_sriov_netdevice": "4",
> >>   "mellanox.com/sriov_dp": "0",
> >>   "memory": "183870336Ki",
> >>   "pods": "110"
> >> }
> >>
> >> The Mellanox NICs are bound to the kernel driver mlx5_core:
> >>
> >> pcgwpod009-c04:~ # dpdk-devbind --status
> >>
> >> Network devices using kernel driver
> >> ===================================
> >> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe
> >> unused=vfio-pci
> >> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe
> >> unused=vfio-pci
> >> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe
> >> unused=vfio-pci
> >> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe
> >> unused=vfio-pci
> >> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0
> >> drv=mlx5_core unused=vfio-pci
> >> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1
> >> drv=mlx5_core unused=vfio-pci
> >> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> >> if=p3p1 drv=ixgbe unused=vfio-pci
> >> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> >> if=p3p2 drv=ixgbe unused=vfio-pci
> >> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=
> >> drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed'
> >> if=p3p1_1 drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=
> >> drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed'
> >> if=p3p1_3 drv=ixgbevf unused=vfio-pci
> >> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> >> if=p4p1 drv=ixgbe unused=vfio-pci
> >> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
> >> if=p4p2 drv=ixgbe unused=vfio-pci
> >> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0
> >> drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1
> >> drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
> >> if=enp216s0f2 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
> >> if=enp216s0f3 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
> >> if=enp216s0f4 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
> >> if=enp216s0f5 drv=mlx5_core unused=vfio-pci
> >>
> >> The interfaces are up:
> >>
> >> pcgwpod009-c04:~ # ibdev2netdev -v
> >> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4
> >QSFP28
> >> fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
> >> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4
> >QSFP28
> >> fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
> >> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4
> >QSFP28
> >> fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
> >> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4
> >QSFP28
> >> fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
> >> 0000:d8:00.2 mlx5_4 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
> >> enp216s0f2 (Up)
> >> 0000:d8:00.3 mlx5_5 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
> >> enp216s0f3 (Up)
> >> 0000:d8:00.4 mlx5_6 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
> >> enp216s0f4 (Up)
> >> 0000:d8:00.5 mlx5_7 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==>
> >> enp216s0f5 (Up) pcgwpod009-c04:~ #
> >>
> >>
> >> But when I run my application the Mellanox adapters are probed and I
> >obtain the following error:
> >>
> >> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci
> >> (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
> >> i
> >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
> >> "6"},"message":"[pio] net_mlx5: unable to recognize
> >> master/representors on the multiple IB devices"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
> >> i
> >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
> >> "6"},"message":"[pio] common_mlx5: Failed to load driver =
> >> net_mlx5."}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
> >> i
> >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
> >> "6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be
> >> used"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","sever
> >> i
> >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
> >> "6"},"message":"[pio] EAL: Bus (pci) probe failed."}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever
> >> i
> >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":
> >> "6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports,
> >> actual 0 ports."}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever
> >> i
> >> ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id"
> >> :"6"},"message":"[pktio_libpio_init] No network ports could be
> >> enabled!"}
> >>
> >> Could you please help me with this issue?
> >>
> >>
> >> Thanks,
> >>
> >> Rocío
> >>
> >
> >
> >
> >
>

[-- Attachment #2: Type: text/html, Size: 58385 bytes --]


Thread overview: 15+ messages
2022-01-14 10:10 Rocio Dominguez
2022-01-16 13:23 ` Thomas Monjalon
2022-01-16 15:30   ` Asaf Penso
2022-01-17 20:20     ` Rocio Dominguez
2022-01-20  7:16       ` Asaf Penso
2022-01-20 19:48         ` Rocio Dominguez
2022-01-31 11:49           ` Asaf Penso
2022-01-31 12:01             ` Rocio Dominguez
2022-01-31 17:02               ` Asaf Penso
2022-02-03 10:30                 ` Rocio Dominguez
2022-02-03 10:49                   ` Asaf Penso
2022-02-04 12:54                     ` Rocio Dominguez
2022-02-04 14:09                       ` Asaf Penso
2022-02-04 14:55                         ` Muhammad Zain-ul-Abideen [this message]
2022-02-07 17:14                           ` Rocio Dominguez
