From: Rocio Dominguez
To: Asaf Penso, "NBU-Contact-Thomas Monjalon (EXTERNAL)", Ferruh Yigit, Qi Zhang
Cc: users@dpdk.org, Matan Azrad, Slava Ovsiienko, Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
Date: Fri, 4 Feb 2022 12:54:20 +0000
List-Id: DPDK usage discussions
Hi Asaf,

Finally I solved the problem with the Intel NICs. I am using dual NUMA, and I realized that my application was using CPUs from NUMA node 0 while I was assigning it a NIC from NUMA node 1. Using a NIC from NUMA node 0 solved the problem.

I don't know whether the problem with the Mellanox NICs could be solved in the same way, but for the moment we will use the Intel NICs.

Thanks,
Rocío

From: Asaf Penso
Sent: Thursday, February 3, 2022 11:50 AM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL); Ferruh Yigit; Qi Zhang
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,

For Intel's NICs it would be better to take it with @Ferruh Yigit / @Qi Zhang.
For Nvidia's, let's continue together.
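The NUMA mismatch described above (application lcores on node 0, NIC on node 1) can be checked programmatically. A minimal sketch, not from the thread: `nic_numa_node` assumes the standard Linux sysfs layout, and `numa_aligned` is a hypothetical helper for the comparison.

```python
from pathlib import Path

def nic_numa_node(pci_addr: str, sysfs: str = "/sys/bus/pci/devices") -> int:
    """Read the NUMA node of a PCI device from sysfs (-1 means unknown)."""
    return int((Path(sysfs) / pci_addr / "numa_node").read_text())

def numa_aligned(nic_node: int, lcore_nodes: set) -> bool:
    """True only when every polling lcore sits on the NIC's NUMA node."""
    return lcore_nodes == {nic_node}

# The situation from this thread: NIC 0000:d8:02.1 on NUMA node 1,
# application lcores pinned to node 0 -> misaligned.
print(numa_aligned(1, {0}))  # False
```

On a real host the NIC node would come from `nic_numa_node("0000:d8:02.1")` and the lcore nodes from the `/sys/devices/system/node/node*/cpulist` entries.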
Regards,
Asaf Penso

From: Rocio Dominguez
Sent: Thursday, February 3, 2022 12:30 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

We have replaced the Mellanox NICs with Intel NICs to try to avoid this problem, but it is not working either; this time we get the following error:

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci

The interfaces are up:

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

testpmd is working:

pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>

Any idea on what could be causing the error this time?

Thanks,
Rocío

From: Asaf Penso
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

We'll need to check, but how do you want to proceed?
You either need 19.11 LTS or 20.11 LTS to work properly.
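As a cross-check of the dpdk-devbind output above, the bound driver for each VF can be read straight from sysfs. A minimal sketch, assuming the standard Linux sysfs layout; `bound_driver` is a hypothetical helper, not part of DPDK or the thread:

```python
from pathlib import Path

def bound_driver(pci_addr: str, sysfs: str = "/sys/bus/pci/devices") -> str:
    """Name of the kernel driver a PCI device is bound to
    (e.g. 'vfio-pci' or 'i40e'), or 'none' if unbound or absent."""
    link = Path(sysfs) / pci_addr / "driver"
    # 'driver' is a symlink into /sys/bus/pci/drivers/<name>
    return link.resolve().name if link.exists() else "none"

# e.g. bound_driver("0000:d8:02.1") should report 'vfio-pci'
# after the binding shown above.
```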
Regards,
Asaf Penso

________________________________
From: Rocio Dominguez
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Yes, it seems that DPDK version 20.08 code is built in with the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that fixes this error. I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

and the code part that solves the error is:

+	if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+		DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+			" driver.");
+		return 1;
+	}

Could you please confirm?

Thanks,
Rocío

From: Asaf Penso
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

I see two differences below.

First, in testpmd the version is 19.11.11, and in your application it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

Second, in your application I see the VFIO driver is not started properly:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}

Regards,
Asaf Penso

From: Rocio Dominguez
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

I have manually compiled and installed DPDK 19.11.11.

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

But when I run my data plane app, the result is:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}

Any idea on what could be the problem?

Thanks,
Rocío

From: Asaf Penso
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.
Let us know the results, and we'll continue from there.

Regards,
Asaf Penso

________________________________
From: Rocio Dominguez
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but in the OpenSUSE repositories the corresponding RPM package for SLES 15 SP2 is not available; the latest one is DPDK 19.11.10. I have installed it, but the problem persists. It's probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also LTS; I am not sure whether it could be a problem to install it on SLES 15 SP2. I will try it anyway.

Also, I will try to find another way to load 19.11.11 on SLES 15 SP2 apart from using RPM or zypper; any suggestion is appreciated.

Thanks,
Rocío

-----Original Message-----
From: Asaf Penso
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL); Rocio Dominguez
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez
>Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this
>> MLNX_OFED version)
>> kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>> {
>>     "resourceName": "mlnx_sriov_netdevice",
>>     "resourcePrefix": "mellanox.com",
>>     "isRdma": true,
>>     "selectors": {
>>         "vendors": ["15b3"],
>>         "devices": ["1014"],
>>         "drivers": ["mlx5_core"],
>>         "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3", "0000:d8:00.4", "0000:d8:00.5"],
>>         "isRdma": true
>> }
>>
>> The sriov device plugin starts without problems, the devices are correctly allocated:
>>
>> {
>>     "cpu": "92",
>>     "ephemeral-storage": "419533922385",
>>     "hugepages-1Gi": "8Gi",
>>     "hugepages-2Mi": "4Gi",
>>     "intel.com/intel_sriov_dpdk": "0",
>>     "intel.com/sriov_cre": "3",
>>     "mellanox.com/mlnx_sriov_netdevice": "4",
>>     "mellanox.com/sriov_dp": "0",
>>     "memory": "183870336Ki",
>>     "pods": "110"
>> }
>>
>> The Mellanox NICs are bound to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
>>
>>
>> But when I run my application the Mellanox adapters are probed and I obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severi
ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id" >> :"6"},"message":"[pktio_libpio_init] No network ports could be >> enabled!"} >> >> Could you please help me with this issue? >> >> >> Thanks, >> >> Roc=EDo >> > > > > --_000_AM5PR0701MB2324C5DAA22F90E1878E595793299AM5PR0701MB2324_ Content-Type: text/html; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable

Hi Asaf,

Finally I solved the problem with the Intel NICs. I am using dual NUMA, and I realized that my application was using CPUs from NUMA 0 while I was assigning a NIC from NUMA 1. Using a NIC from NUMA 0 solved the problem.
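[Editor's note] The NUMA check described above can be sketched as follows. This is a minimal illustration: the sysfs `numa_node` attribute is the standard way to find a PCI device's node, but the node number and core ranges below are hard-coded example values rather than read live from this host.

```shell
# The NUMA node of a PCI device can be read from sysfs, e.g.:
#   cat /sys/bus/pci/devices/0000:d8:02.1/numa_node
# and the per-node CPU ranges from 'lscpu'. Both are stubbed here:
nic_node=1            # example: the NIC sits on NUMA node 1
node0_cores="0-23"    # example CPU ranges per NUMA node
node1_cores="24-47"

# Pick EAL cores on the same NUMA node as the NIC:
if [ "$nic_node" -eq 1 ]; then
    eal_cores="$node1_cores"
else
    eal_cores="$node0_cores"
fi
echo "run: testpmd -l $eal_cores -n 4 -w 0000:d8:02.1 ..."
```

Mixing cores from one node with a NIC on the other is exactly the mismatch described in this email.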

I don't know whether the problem with the Mellanox NICs could be solved in the same way, but for the moment we will use Intel NICs.

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, February 3, 2022 11:50 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Ferruh Yigit <ferruh.yigit@intel.com>; Qi Zhang <qi.z.zhang@intel.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,

For the Intel NIC it would be better to take it with @Ferruh Yigit / @Qi Zhang.
For Nvidia's, let's continue together.

Regards,
Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, February 3, 2022 12:30 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Hi Asaf,

We have replaced the Mellanox NICs with Intel NICs to try to avoid this problem, but it is still not working, this time with the following error:

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL:   using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:
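[Editor's note] The VF creation and binding steps mentioned above can be sketched as follows. The PF name `p8p1` and the `d8:02.x` addresses are this host's example values; the privileged commands are shown as comments so the snippet itself is side-effect-free.

```shell
pf=p8p1        # parent PF interface (from 'dpdk-devbind --status' below)
num_vfs=4      # number of VFs to create on it

# Privileged steps, shown as comments:
#   echo "$num_vfs" > "/sys/class/net/$pf/device/sriov_numvfs"  # create the VFs
#   modprobe vfio-pci                                           # load the VFIO driver
#   dpdk-devbind --bind=vfio-pci 0000:d8:02.0 0000:d8:02.1 \
#                0000:d8:02.2 0000:d8:02.3                      # rebind the VFs
echo "would create $num_vfs VFs on $pf and bind them to vfio-pci"
```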

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci

The interfaces are up:

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

The testpmd is working:

pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>

Any idea on what could be causing the error this time?

Thanks,

Rocío

 

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

We'll need to check, but how do you want to proceed?

You either need 19.11 LTS or 20.11 LTS to work properly.

Regards,
Asaf Penso


From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Yes, it seems that DPDK 20.08 code is built into the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that fixes this error.

I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }

Could you please confirm?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

I see two differences below.

First, in testpmd the version is 19.11.11, and in your application it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

Second, in your application, I see the VFIO driver is not started properly:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
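[Editor's note] "cannot open VFIO container, error 2" means `/dev/vfio/vfio` does not exist where the application runs. A minimal check is sketched below; the device path is the standard one DPDK opens, while the `modprobe` and container-mount remarks in the comments are general guidance, not taken from this thread.

```shell
vfio_dev="/dev/vfio/vfio"    # the VFIO container device DPDK tries to open
if [ -e "$vfio_dev" ]; then
    status="present"
else
    status="missing"         # typical fixes: 'modprobe vfio-pci' on the host,
                             # and in Kubernetes mount /dev/vfio into the pod
fi
echo "VFIO container $status"
```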

Regards,
Asaf Penso

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

I have manually compiled and installed DPDK 19.11.11.

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

But when I run my Data Plane app, the result is:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}

Any idea on what could be the problem?

Thanks,

Rocío

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.
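[Editor's note] A sketch of such a manual build follows. The URL pattern and the meson/ninja steps reflect the usual DPDK 19.11 LTS flow, not instructions from this thread, and the commands needing network access or root are commented out.

```shell
ver=19.11.11
tarball="dpdk-$ver.tar.xz"
srcdir="dpdk-stable-$ver"

# Typical steps (commented out; they need network access and root):
#   wget "https://fast.dpdk.org/rel/$tarball"
#   tar xf "$tarball" && cd "$srcdir"
#   meson build && ninja -C build && ninja -C build install && ldconfig
echo "would build $srcdir from $tarball"
```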

Let us know the results, and we'll continue from there.

Regards,
Asaf Penso


From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but in the OpenSUSE repositories the corresponding RPM package for SLES 15 SP2 is not available; the latest one is DPDK 19.11.10.

I have installed it but the problem persists. It's probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an LTS release; I'm not sure whether it could be a problem to install it on SLES 15 SP2. I will try it anyway.

Also I will try to find another way to load 19.11.11 on SLES 15 SP2 apart from using RPM or zypper; any suggestion is appreciated.

Thanks,

Rocío

-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Rocio Dominguez <rocio.dominguez@ericsson.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon <thomas@monjalon.net>
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez <rocio.dominguez@ericsson.com>
>Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
><viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this
>> MLNX_OFED version) kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>>             {
>>                 "resourceName": "mlnx_sriov_netdevice",
>>                 "resourcePrefix": "mellanox.com",
>>                 "isRdma": true,
>>                 "selectors": {
>>                     "vendors": ["15b3"],
>>                     "devices": ["1014"],
>>                     "drivers": ["mlx5_core"],
>>                     "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3",
>> "0000:d8:00.4",
>"0000:d8:00.5"],
>>                     "isRdma": true
>>             }
>>
>> The sriov device plugin starts without problems, the devices are
>> correctly
>allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are binded to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
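To double-check which devices sit on which driver, the `dpdk-devbind --status` lines can be grouped by their `drv=` field. A small illustrative parser over a few of the lines above (not a DPDK tool, just a sanity check on the listing):

```python
import re
from collections import defaultdict

# A few lines in the dpdk-devbind --status format shown above.
status = """\
0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
"""

# Group PCI addresses by the kernel driver they are bound to.
by_driver = defaultdict(list)
for line in status.splitlines():
    m = re.match(r"(\S+) .* drv=(\S+)", line)
    if m:
        by_driver[m.group(2)].append(m.group(1))

print(by_driver["mlx5_core"])  # ['0000:3b:00.0', '0000:d8:00.2', '0000:d8:00.3']
```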
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA)  fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
>>
>>
>> But when I run my application, the Mellanox adapters are probed and I
>> obtain the following errors:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
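Since the application emits structured JSON logs, the failing record can be extracted programmatically, which is handy when scanning many pods for this probe failure. A minimal illustrative sketch over one record copied from the output above:

```python
import json

# One structured log record in the format shown above.
record = json.loads(
    '{"version":"0.2.0","severity":"info",'
    '"metadata":{"proc_id":"6"},'
    '"message":"[pio] net_mlx5: unable to recognize '
    'master/representors on the multiple IB devices"}'
)

# Flag the mlx5 probe failure that precedes "Bus (pci) probe failed."
is_mlx5_failure = "unable to recognize master/representors" in record["message"]
print(is_mlx5_failure)  # True
```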
>>
>> Could you please help me with this issue?
>>
>>
>> Thanks,
>>
>> Rocío
>>