From: Asaf Penso <asafp@nvidia.com>
To: Rocio Dominguez <rocio.dominguez@ericsson.com>, NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org, Matan Azrad <matan@nvidia.com>, Slava Ovsiienko <viacheslavo@nvidia.com>, Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices
Date: Mon, 31 Jan 2022 17:02:28 +0000

We'll need to check, but how do you want to proceed?
You either need 19.11 LTS or 20.11 LTS to work properly.

Regards,
Asaf Penso

________________________________
From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Yes, it seems that DPDK 20.08 code is built into the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that overrides this error.

I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }

Could you please confirm?

Thanks,

Rocío
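For readers browsing the archive: the hunk above is a "skip probing" guard. The stand-alone sketch below illustrates the same pattern with placeholder names only; it is not the real mlx5 code and does not claim to reproduce the actual fix. The idea is that a probe callback returns a positive value to decline a device that belongs to another driver class, instead of failing the probe.

    /* Illustration of the skip-probing pattern; all names are stand-ins. */
    #include <stdio.h>

    enum dev_class { CLASS_NET, CLASS_VDPA };

    struct device { enum dev_class requested_class; };

    /* Stand-in for a helper like mlx5_class_get(): derive the class from devargs. */
    static enum dev_class class_get(const struct device *dev)
    {
            return dev->requested_class;
    }

    /* Stand-in probe callback: return 1 to tell the bus "not mine, skip me". */
    static int net_probe(struct device *dev)
    {
            if (class_get(dev) != CLASS_NET) {
                    printf("Skip probing - should be probed by other driver.\n");
                    return 1;
            }
            printf("Probing as a net device.\n");
            return 0;
    }

    int main(void)
    {
            struct device vdpa_dev = { CLASS_VDPA };
            struct device net_dev  = { CLASS_NET };

            net_probe(&vdpa_dev);   /* declined, no error */
            net_probe(&net_dev);    /* probed normally */
            return 0;
    }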
From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

I see two differences below.

First, in testpmd the version is 19.11.11, and in your application it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

Second, in your application, I see the VFIO driver is not started properly:

20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}

Regards,
Asaf Penso
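Since the mismatch between the DPDK used by testpmd (19.11.11) and the DPDK built into the application (20.08) is the key point here, one quick way to confirm which version an application is actually linked against is to print rte_version() after EAL initialization. The following is a minimal, generic sketch, not taken from the pio/VNF sources:

    /* Minimal illustration: print the DPDK version the binary is linked against. */
    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_version.h>

    int main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0) {
                    fprintf(stderr, "rte_eal_init failed\n");
                    return 1;
            }

            /* Prints e.g. "DPDK 20.08.0" or "DPDK 19.11.11". */
            printf("%s\n", rte_version());

            rte_eal_cleanup();
            return 0;
    }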
From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

I have manually compiled and installed DPDK 19.11.11.

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

But when I run my Data Plane app, the result is:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}

Any idea on what could be the problem?

Thanks,

Rocío
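The "rte_eal_init() args" line in the log above shows which EAL options the application assembles internally before handing control to DPDK. A minimal sketch of that pattern is below; the option strings are copied from the log, while everything else is illustrative and not the actual pio/VNF source:

    /* Illustration only: pass EAL options like the ones in the log to rte_eal_init(). */
    #include <stdio.h>
    #include <rte_eal.h>

    int main(void)
    {
            char *eal_argv[] = {
                    "pio",
                    "-m", "1024",
                    "-n", "4",
                    "--no-telemetry",
                    "--file-prefix", "pio-0",
                    "--master-lcore=4",
                    "--lcores=4@(4)",
                    "--pci-whitelist", "0000:d8:00.5",   /* same as -w in DPDK <= 20.08 */
                    "--base-virtaddr=0x200000000",
                    "--iova-mode=va",
                    "--legacy-mem",
                    "--no-shconf",
            };
            int eal_argc = (int)(sizeof(eal_argv) / sizeof(eal_argv[0]));

            if (rte_eal_init(eal_argc, eal_argv) < 0) {
                    fprintf(stderr, "rte_eal_init failed\n");
                    return 1;
            }
            /* ... port configuration would follow here ... */
            rte_eal_cleanup();
            return 0;
    }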
info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pio] net_mlx5: unable to recognize master/representors on the multip= le IB devices"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pio] common_mlx5: Failed to load driver =3D net_mlx5."} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pio] EAL: Bus (pci) probe failed."} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports.= "} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mes= sage":"[pktio_libpio_init] No network ports could be enabled!"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pktio_init_cpu] libpio packet module is NOT initialized"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pktio_init_cpu] pktsock packet module is NOT initialized"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pktio_init_cpu] linux packet module is initialized"} {"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"= info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"mess= age":"[pktio_init_cpu] tap packet module is NOT initialized"} Any idea on what could be the problem? Thanks, Roc=EDo From: Asaf Penso > Sent: Thursday, January 20, 2022 8:17 AM To: Rocio Dominguez >; NBU-Contact-Thomas Monjalon (EXTERNAL) > Cc: users@dpdk.org; Matan Azrad >; Slava Ovsiienko >; Raslan Darawsheh > Subject: Re: net_mlx5: unable to recognize master/representors on the multi= ple IB devices Although inbox drivers come with a pre installed DPDK, you can manually dow= nload, compile, install, and work with whatever version you wish. Let us know the results, and we'll continue from there. Regards, Asaf Penso ________________________________ From: Rocio Dominguez > Sent: Monday, January 17, 2022 10:20:58 PM To: Asaf Penso >; NBU-Contact-Tho= mas Monjalon (EXTERNAL) > Cc: users@dpdk.org >; Matan Azrad >; Slava Ovsi= ienko >; Raslan Daraw= sheh > Subject: RE: net_mlx5: unable to recognize master/representors on the multi= ple IB devices Hi Asaf, Thanks for the prompt answer. I have checked that the latest 19.11 LTS is 19.11.11, but in OpenSUSE repos= itories the corresponding RPM package for SLES 15 SP2 is not available, the= latest one is DPDK 19.11.10. I have installed it but the problem persists. It's probably solved in 19.11= .11. There is a RPM package in SLES 15 SP3 for DPDK 20.11.3, which is LTS also, = not sure if it could be a problem to install it in SLES 15 SP2. I will try = it anyway. 
-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Rocio Dominguez <rocio.dominguez@ericsson.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell us whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon <thomas@monjalon.net>
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez <rocio.dominguez@ericsson.com>
>Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
><viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this MLNX_OFED version)
>> kernel 5.3.18-24.34-default
>>
>> This is my SRIOV configuration for DPDK-capable PCI slots:
>>
>> {
>>     "resourceName": "mlnx_sriov_netdevice",
>>     "resourcePrefix": "mellanox.com",
>>     "isRdma": true,
>>     "selectors": {
>>         "vendors": ["15b3"],
>>         "devices": ["1014"],
>>         "drivers": ["mlx5_core"],
>>         "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3", "0000:d8:00.4", "0000:d8:00.5"],
>>         "isRdma": true
>>     }
>>
>> The sriov device plugin starts without problems, the devices are correctly allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are bound to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
>>
>> But when I run my application, the Mellanox adapters are probed and I
>> obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id" >> :"6"},"message":"[pktio_libpio_init] No network ports could be >> enabled!"} >> >> Could you please help me with this issue? >> >> >> Thanks, >> >> Roc=EDo >> > > > > --_000_DM5PR1201MB2555C04807E5231338B5C1E3CD259DM5PR1201MB2555_ Content-Type: text/html; charset="Windows-1252" Content-Transfer-Encoding: quoted-printable