From: Muhammad Zain-ul-Abideen
Date: Fri, 4 Feb 2022 19:55:12 +0500
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
To: Asaf Penso
Cc: Rocio Dominguez, "NBU-Contact-Thomas Monjalon (EXTERNAL)", Ferruh Yigit, Qi Zhang, users@dpdk.org, Matan Azrad, Slava Ovsiienko, Raslan Darawsheh
List-Id: DPDK usage discussions

Was the mlx card on cpu2?

On Fri, Feb 4, 2022, 7:09 PM Asaf Penso wrote:

> Great, thanks for the update.
> I think the community can benefit if you try with Mellanox NIC and find an
> issue that will be resolved.
>
> Regards,
> Asaf Penso
> ------------------------------
> *From:* Rocio Dominguez
> *Sent:* Friday, February 4, 2022 2:54:20 PM
> *To:* Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL); Ferruh Yigit; Qi Zhang
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hi Asaf,
>
> Finally I solved the problem with Intel NICs. I am using dual NUMA, and I
> realized that my application is using CPUs from NUMA 0 while I was
> assigning a NIC from NUMA 1. Using a NIC from NUMA 0 solved the problem.
>
> I don't know if the problem with Mellanox NICs could be solved in the same
> way.
> But for the moment, we will use Intel NICs.
>
> Thanks,
>
> Rocío
>
> *From:* Asaf Penso
> *Sent:* Thursday, February 3, 2022 11:50 AM
> *To:* Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL); Ferruh Yigit <ferruh.yigit@intel.com>; Qi Zhang
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hello Rocio,
>
> For Intel's NIC it would be better to take it with @Ferruh Yigit/@Qi Zhang.
> For Nvidia's, let's continue together.
>
> Regards,
> Asaf Penso
>
> *From:* Rocio Dominguez
> *Sent:* Thursday, February 3, 2022 12:30 PM
> *To:* Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hi Asaf,
>
> We have replaced the Mellanox NICs with Intel NICs to try to avoid this
> problem, but it is not working either, this time with the following error:
>
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: using IOMMU type 1 (Type 1)"}
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}*
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}*
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}*
> *{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}*
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
> {"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
>
> Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:
>
> pcgwpod009-c04:~ # dpdk-devbind --status
>
> Network devices using DPDK-compatible driver
> ============================================
> 0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
> 0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
> 0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
> 0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
>
> Network devices using kernel driver
> ===================================
> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
> 0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
> 0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
> 0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
> 0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci
>
> The interfaces are up:
>
> pcgwpod009-c04:~ # ip link show dev p8p1
> 290: p8p1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
>     link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
>     vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
>     vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
>     vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
>     vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
> pcgwpod009-c04:~ #
>
> The testpmd is working:
>
> pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
> EAL: Detected 96 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:d8:02.0 on NUMA socket 1
> EAL:   probe driver: 8086:154c net_i40e_vf
> EAL:   using IOMMU type 1 (Type 1)
> EAL: PCI device 0000:d8:02.1 on NUMA socket 1
> EAL:   probe driver: 8086:154c net_i40e_vf
> EAL: PCI device 0000:d8:02.2 on NUMA socket 1
> EAL:   probe driver: 8086:154c net_i40e_vf
> EAL: PCI device 0000:d8:02.3 on NUMA socket 1
> EAL:   probe driver: 8086:154c net_i40e_vf
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 1)
> Port 0: FE:72:DB:BE:05:EF
> Configuring Port 1 (socket 1)
> Port 1: 5E:C5:3E:86:1A:84
> Configuring Port 2 (socket 1)
> Port 2: 42:F0:5D:B0:1F:B3
> Configuring Port 3 (socket 1)
> Port 3: 46:00:42:2F:A2:DE
> Checking link statuses...
> Done
> testpmd>
>
> Any idea on what could be causing the error this time?
>
> Thanks,
>
> Rocío
>
> *From:* Asaf Penso
> *Sent:* Monday, January 31, 2022 6:02 PM
> *To:* Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> We'll need to check, but how do you want to proceed?
> You either need 19.11 LTS or 20.11 LTS to work properly.
> Regards,
> Asaf Penso
> ------------------------------
> *From:* Rocio Dominguez
> *Sent:* Monday, January 31, 2022 2:01:43 PM
> *To:* Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hi Asaf,
>
> Yes, it seems that DPDK version 20.08 code is built in with the VNF I'm
> deploying, so it is always using this version, which apparently doesn't
> have the patch that fixes this error.
>
> I think the patch is the following:
>
> https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/
>
> and the code part that solves the error is:
>
> +       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
> +               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
> +                       " driver.");
> +               return 1;
> +       }
>
> Could you please confirm?
>
> Thanks,
>
> Rocío
>
> *From:* Asaf Penso
> *Sent:* Monday, January 31, 2022 12:49 PM
> *To:* Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> I see two differences below.
> First, in testpmd the version is 19.11.11, and in your application, it's 20.08.
> See this print:
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
>
> Second, in your application, I see the VFIO driver is not started properly:
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}
>
> Regards,
> Asaf Penso
>
> *From:* Rocio Dominguez
> *Sent:* Thursday, January 20, 2022 9:49 PM
> *To:* Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hi Asaf,
>
> I have manually compiled and installed DPDK 19.11.11.
>
> Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:
>
> pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
> EAL: Detected 96 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:d8:00.2 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> EAL: PCI device 0000:d8:00.3 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> EAL: PCI device 0000:d8:00.4 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> EAL: PCI device 0000:d8:00.5 on NUMA socket 1
> EAL:   probe driver: 15b3:1014 net_mlx5
> Interactive-mode selected
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 1)
> Port 0: 36:FE:F0:D2:90:27
> Configuring Port 1 (socket 1)
> Port 1: 72:AC:33:BF:0A:FA
> Configuring Port 2 (socket 1)
> Port 2: 1E:8D:81:60:43:E0
> Configuring Port 3 (socket 1)
> Port 3: C2:3C:EA:94:06:B4
> Checking link statuses...
> Done
> testpmd>
>
> But when I run my Data Plane app, the result is:
>
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
> *{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}*
> *{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}*
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
> {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}
>
> Any idea on what could be the problem?
>
> Thanks,
>
> Rocío
>
> *From:* Asaf Penso
> *Sent:* Thursday, January 20, 2022 8:17 AM
> *To:* Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> *Subject:* Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Although inbox drivers come with a pre-installed DPDK, you can manually
> download, compile, install, and work with whatever version you wish.
>
> Let us know the results, and we'll continue from there.
>
> Regards,
> Asaf Penso
> ------------------------------
> *From:* Rocio Dominguez
> *Sent:* Monday, January 17, 2022 10:20:58 PM
> *To:* Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
> *Cc:* users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh <rasland@nvidia.com>
> *Subject:* RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hi Asaf,
>
> Thanks for the prompt answer.
> I have checked that the latest 19.11 LTS is 19.11.11, but in the openSUSE
> repositories the corresponding RPM package for SLES 15 SP2 is not
> available; the latest one is DPDK 19.11.10.
>
> I have installed it but the problem persists. It's probably solved in
> 19.11.11.
>
> There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an
> LTS; not sure if it could be a problem to install it on SLES 15 SP2. I
> will try it anyway.
>
> Also I will try to find another way to load 19.11.11 on SLES 15 SP2 apart
> from using RPM or zypper; any suggestion is appreciated.
>
> Thanks,
>
> Rocío
>
> -----Original Message-----
> From: Asaf Penso
> Sent: Sunday, January 16, 2022 4:31 PM
> To: NBU-Contact-Thomas Monjalon (EXTERNAL); Rocio Dominguez
> Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
> Hello Rocio,
> IIRC, there was a fix in a recent stable version.
> Would you please try taking the latest 19.11 LTS and tell whether you
> still see the issue?
>
> Regards,
> Asaf Penso
>
> >-----Original Message-----
> >From: Thomas Monjalon
> >Sent: Sunday, January 16, 2022 3:24 PM
> >To: Rocio Dominguez
> >Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh
> >Subject: Re: net_mlx5: unable to recognize master/representors on the
> >multiple IB devices
> >
> >+Cc mlx5 experts
> >
> >
> >14/01/2022 11:10, Rocio Dominguez:
> >> Hi,
> >>
> >> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
> >> I'm using:
> >>
> >> OS SLES 15 SP2
> >> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
> >> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
> >> Mellanox adapters firmware 12.28.2006 (corresponding to this MLNX_OFED version)
> >> kernel 5.3.18-24.34-default
> >>
> >> This is my SRIOV configuration for DPDK capable PCI slots:
> >>
> >> {
> >>     "resourceName": "mlnx_sriov_netdevice",
> >>     "resourcePrefix": "mellanox.com",
> >>     "isRdma": true,
> >>     "selectors": {
> >>         "vendors": ["15b3"],
> >>         "devices": ["1014"],
> >>         "drivers": ["mlx5_core"],
> >>         "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3", "0000:d8:00.4", "0000:d8:00.5"],
> >>         "isRdma": true
> >>     }
> >> }
> >>
> >> The sriov device plugin starts without problems, the devices are correctly allocated:
> >>
> >> {
> >>     "cpu": "92",
> >>     "ephemeral-storage": "419533922385",
> >>     "hugepages-1Gi": "8Gi",
> >>     "hugepages-2Mi": "4Gi",
> >>     "intel.com/intel_sriov_dpdk": "0",
> >>     "intel.com/sriov_cre": "3",
> >>     "mellanox.com/mlnx_sriov_netdevice": "4",
> >>     "mellanox.com/sriov_dp": "0",
> >>     "memory": "183870336Ki",
> >>     "pods": "110"
> >> }
> >>
> >> The Mellanox NICs are bound to the kernel driver mlx5_core:
> >>
> >> pcgwpod009-c04:~ # dpdk-devbind --status
> >>
> >> Network devices using kernel driver
> >> ===================================
> >> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
> >> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
> >> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
> >> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
> >> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
> >> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
> >> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
> >> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
> >> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
> >> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
> >> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
> >> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
> >> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
> >> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
> >>
> >> The interfaces are up:
> >>
> >> pcgwpod009-c04:~ # ibdev2netdev -v
> >> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
> >> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
> >> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
> >> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
> >> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
> >> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
> >> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
> >> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
> >> pcgwpod009-c04:~ #
> >>
> >> But when I run my application the Mellanox adapters are probed and I obtain the following error:
> >>
> >> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
> >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever > >> i > >> ty":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id": > >> "6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, > >> actual 0 ports."} > >> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","sever > >> i > >> ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id" > >> :"6"},"message":"[pktio_libpio_init] No network ports could be > >> enabled!"} > >> > >> Could you please help me with this issue? > >> > >> > >> Thanks, > >> > >> Roc=C3=ADo > >> > > > > > > > > > --000000000000d4b96805d732704b Content-Type: text/html; charset="UTF-8" Content-Transfer-Encoding: quoted-printable
Was the mlx card on cpu2?

On Fri, Feb 4, 2022, 7:09 PM Asaf Penso <asafp@nvidia.com> wrote:
Great, thanks for the update.
I think the community can benefit if you try with a Mellanox NIC and any issue you find gets resolved.

Regards,
Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Friday, February 4, 2022 2:54:20 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Qi Zhang <qi.z.zhang@intel.com>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
Hi Asaf,

Finally I solved the problem with the Intel NICs. I am using dual NUMA, and I realized that my application was using CPUs from NUMA node 0 while I was assigning a NIC from NUMA node 1. Using a NIC from NUMA node 0 solved the problem.

I don't know if the problem with the Mellanox NICs could be solved in the same way. But for the moment, we will use Intel NICs.

Thanks,

Rocío
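For anyone hitting the same symptom: the NUMA node of a PCI device can be read straight from sysfs and compared against the node of the cores handed to EAL. A minimal sketch; the PCI address is one from this thread, and `numa_match` is a hypothetical helper, not a DPDK tool:

```shell
#!/bin/sh
# Hypothetical helper: compare the NIC's NUMA node with the lcores' node.
numa_match() {
  [ "$1" = "$2" ] && echo "match" || echo "mismatch"
}

PCI=0000:d8:00.4   # example address from this thread; adjust as needed
if [ -r "/sys/bus/pci/devices/$PCI/numa_node" ]; then
  nic_node=$(cat "/sys/bus/pci/devices/$PCI/numa_node")
  echo "NIC $PCI is on NUMA node $nic_node"
  # lscpu lists which CPUs belong to each node, e.g. "NUMA node1 CPU(s): ..."
  lscpu | grep '^NUMA' || true
  # if the app pins to a CPU on node 0, check: numa_match "$nic_node" 0
fi
```

If the helper reports a mismatch, either move the app's lcores or pick a VF from the NIC on the matching node, as described above.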

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, February 3, 2022 11:50 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Qi Zhang <qi.z.zhang@intel.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,

For Intel's NICs it would be better to take it with @Ferruh Yigit/@Qi Zhang.

For Nvidia's, let's continue together.

Regards,

Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, February 3, 2022 12:30 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

We have replaced the Mellanox NICs with Intel NICs to avoid this problem, but it is not working either, this time with the following error:

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL:   using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci
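For reference, this is roughly how VFs end up in the state shown above; a hedged sketch, assuming the `dpdk-devbind` wrapper available on this system (the PCI addresses are the ones from this thread):

```shell
# Load the VFIO PCI driver and rebind the XXV710 VFs to it.
modprobe vfio-pci
dpdk-devbind --bind=vfio-pci 0000:d8:02.0 0000:d8:02.1 0000:d8:02.2 0000:d8:02.3
# The VFs should now be listed under "Network devices using DPDK-compatible driver".
dpdk-devbind --status
```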

The interfaces are up:

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

The testpmd is working:

pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>
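A side note on the "no mounted hugetlbfs found for that size" warning in the output above: the 2 MB pages that were reserved cannot be used by EAL until a hugetlbfs mount for that page size exists. A hedged configuration sketch (the mount point is conventional, not mandated):

```shell
# Make the reserved 2 MB hugepages usable by mounting hugetlbfs.
mkdir -p /dev/hugepages
mount -t hugetlbfs nodev /dev/hugepages   # default page size (2 MB on x86)
echo 2048 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
# To persist across reboots, add to /etc/fstab:
#   nodev /dev/hugepages hugetlbfs defaults 0 0
```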

Any idea on what could be causing the error this time?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

We'll need to check, but how do you want to proceed?

You either need 19.11 LTS or 20.11 LTS to work properly.

Regards,

Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Yes, it seems that DPDK version 20.08 is built in to the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that fixes this error.

I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }

Could you please confirm?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

I see two differences below.

First, in testpmd the version is 19.11.11, and in your application it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

Second, in your application I see the VFIO driver is not started properly:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}

Regards,

Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

I have manually compiled and installed DPDK 19.11.11.

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i

EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

But when I run my Data Plane app, the result is:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}
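One concrete difference visible in the log above: EAL reports "cannot open VFIO container, error 2 (No such file or directory)", which means /dev/vfio/vfio is not visible to the process (when the app runs in a container or pod, the device node must be mounted in). A small sketch for checking the prerequisite; `vfio_container_present` is a hypothetical helper, not a DPDK tool:

```shell
#!/bin/sh
# Hypothetical helper: report whether a VFIO container node exists.
vfio_container_present() {
  [ -c "${1:-/dev/vfio/vfio}" ] && echo "yes" || echo "no"
}

echo "VFIO container present: $(vfio_container_present)"
# If "no": load the module and check that the IOMMU is enabled on the
# kernel command line, then re-check /dev/vfio/:
#   modprobe vfio-pci
#   grep -o 'intel_iommu=[^ ]*' /proc/cmdline
#   ls -l /dev/vfio/
```

Note that mlx5 PMDs do not require VFIO (they attach through the kernel mlx5_core driver), so this may be unrelated to the master/representor error itself, but it is worth ruling out.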

Any idea on what could be the problem?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.

Let us know the results, and we'll continue from there.

Regards,

Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but in the openSUSE repositories the corresponding RPM package for SLES 15 SP2 is not available; the latest one is DPDK 19.11.10.

I have installed it, but the problem persists. It's probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an LTS; I'm not sure if it could be a problem to install it on SLES 15 SP2. I will try it anyway.

Also, I will try to find another way to load 19.11.11 on SLES 15 SP2 apart from using RPM or zypper; any suggestion is appreciated.

Thanks,

Rocío
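One way to get 19.11.11 onto SLES 15 SP2 without a distro RPM is building from the upstream release tarball. A hedged sketch, assuming meson/ninja and a C toolchain are installed (package names for the build dependencies vary by distro):

```shell
# Fetch and build DPDK 19.11.11 from the official release archive.
wget https://fast.dpdk.org/rel/dpdk-19.11.11.tar.xz
tar xf dpdk-19.11.11.tar.xz
cd dpdk-stable-19.11.11
meson build
ninja -C build
ninja -C build install   # installs under /usr/local by default
ldconfig
```

The unpacked directory name matches the `dpdk-stable-19.11.11` prompt that appears later in this thread.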

-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Rocio Dominguez <rocio.dominguez@ericsson.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon <thomas@monjalon.net>
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez <rocio.dominguez@ericsson.com>
>Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
>Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this MLNX_OFED version)
>> kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>>             {
>>                 "resourceName": "mlnx_sriov_netdevice",
>>                 "resourcePrefix": "mellanox.com",
>>                 "isRdma": true,
>>                 "selectors": {
>>                     "vendors": ["15b3"],
>>                     "devices": ["1014"],
>>                     "drivers": ["mlx5_core"],
>>                     "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3", "0000:d8:00.4", "0000:d8:00.5"],
>>                     "isRdma": true
>>             }
>>
>> The sriov device plugin starts without problems, the devices are correctly allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are binded to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ====================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
>>
>>
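A quick way to cross-check the hardware layout shown above: on Linux, sysfs reports the NUMA node of each PCI device, which can be compared against the NUMA node of the cores the application is pinned to (the mismatch that this thread later identifies for the Intel NICs). This is a minimal sketch using the PCI addresses from the dpdk-devbind output; the helper function name is illustrative, not part of any DPDK tool.

```shell
# numa_node_of: print the NUMA node of a PCI device from sysfs,
# or -1 when the device (or the attribute) does not exist.
numa_node_of() {
    cat "/sys/bus/pci/devices/$1/numa_node" 2>/dev/null || echo -1
}

# PCI addresses taken from the dpdk-devbind listing in this thread.
for dev in 0000:3b:00.0 0000:d8:00.0 0000:d8:00.4; do
    echo "$dev -> NUMA node $(numa_node_of "$dev")"
done
```

If the reported node differs from the node of the application's cores (`lscpu` shows the core-to-node mapping), the same NUMA mismatch seen with the Intel NICs would apply to the Mellanox ports as well.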
>> But when I run my application the Mellanox adapters are probed and I obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
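The "unable to recognize master/representors on the multiple IB devices" message comes from the mlx5 PMD when it cannot disambiguate among several IB devices during probe. A common first step (not confirmed as the fix in this thread) is to restrict EAL to the single intended device with the `-a`/`--allow` option available on recent DPDK releases, and to pick cores on the same NUMA socket the log reports (socket 1 here). A small sketch that builds such an argument string; the PCI address is the one from the log, while the core list is a placeholder assumption.

```shell
# build_eal_args: assemble EAL arguments that allowlist one PCI device
# and pin the application to an explicit core list.
# $1 = PCI address of the NIC, $2 = core list on the same NUMA socket.
build_eal_args() {
    echo "-l $2 -a $1"
}

# Example (core list "24-27" is illustrative, not taken from the thread):
#   dpdk-testpmd $(build_eal_args 0000:d8:00.4 "24-27") -- -i
build_eal_args 0000:d8:00.4 "24-27"
```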
>>
>> Could you please help me with this issue?
>>
>>
>> Thanks,
>>
>> Rocío
>>