From: Asaf Penso
To: Rocio Dominguez, NBU-Contact-Thomas Monjalon (EXTERNAL), Ferruh Yigit, Qi Zhang
CC: users@dpdk.org, Matan Azrad, Slava Ovsiienko, Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices
Date: Thu, 3 Feb 2022 10:49:33 +0000
List-Id: DPDK usage discussions

Hello Rocio,

For Intel's NIC it would be better to take it with @Ferruh Yigit / @Qi Zhang.
For Nvidia's, let's continue together.
Regards,
Asaf Penso

From: Rocio Dominguez
Sent: Thursday, February 3, 2022 12:30 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

We have replaced the Mellanox NICs with Intel NICs to try to avoid this problem, but it is not working either; this time it fails with the following error:

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci

The interfaces are up:

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

testpmd is working:

pcgwpod009-c04:~ # testpmd -l 8-15
-n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>

Any idea on what could be causing the error this time?

Thanks,
Rocío

From: Asaf Penso
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

We'll need to check, but how do you want to proceed?
You either need 19.11 LTS or 20.11 LTS to work properly.
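One more observation from both of your logs: EAL prints "2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size", so the reserved 2 MiB pages are not actually usable by DPDK. A minimal sketch to expose them follows; the /mnt/huge mount point is an assumption, adjust it for your setup, and in a Kubernetes pod the mount also has to be propagated into the pod:

```shell
# Mount a hugetlbfs instance for the already-reserved 2 MiB pages
# (mount point /mnt/huge is an assumed example; requires root).
mkdir -p /mnt/huge
mount -t hugetlbfs -o pagesize=2M nodev /mnt/huge

# Verify the mount exists and inspect the per-size pools.
grep hugetlbfs /proc/mounts
grep -i huge /proc/meminfo
```

If several hugetlbfs mounts exist, the EAL option --huge-dir can point the application at a specific one.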
Regards,
Asaf Penso

________________________________
From: Rocio Dominguez
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Yes, it seems that DPDK version 20.08 is built into the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that fixes this error.

I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }

Could you please confirm?

Thanks,
Rocío

From: Asaf Penso
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

I see two differences below.

First, in testpmd the version is 19.11.11, and in your application, it's 20.08.
See this print:
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

Second, in your application, I see the VFIO driver is not started properly:
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}

Regards,
Asaf Penso

From: Rocio Dominguez
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

I have manually compiled and installed DPDK 19.11.11.

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool : n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool : n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

But when I run my Data Plane app, the result is:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}

Any idea on what could be the problem?

Thanks,
Rocío

From: Asaf Penso
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.

Let us know the results, and we'll continue from there.

Regards,
Asaf Penso

________________________________
From: Rocio Dominguez
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso; NBU-Contact-Thomas Monjalon (EXTERNAL)
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but in the OpenSUSE repositories the corresponding RPM package for SLES 15 SP2 is not available; the latest one is DPDK 19.11.10. I have installed it, but the problem persists. It's probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also LTS; I'm not sure whether it could be a problem to install it on SLES 15 SP2. I will try it anyway.

Also, I will try to find another way to load 19.11.11 on SLES 15 SP2 apart from using RPM or zypper; any suggestion is appreciated.

Thanks,
Rocío

-----Original Message-----
From: Asaf Penso
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL); Rocio Dominguez
Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,

IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez
>Cc: users@dpdk.org; Matan Azrad; Slava Ovsiienko; Raslan Darawsheh
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this MLNX_OFED version)
>> kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>> {
>>     "resourceName": "mlnx_sriov_netdevice",
>>     "resourcePrefix": "mellanox.com",
>>     "isRdma": true,
>>     "selectors": {
>>         "vendors": ["15b3"],
>>         "devices": ["1014"],
>>         "drivers": ["mlx5_core"],
>>         "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3", "0000:d8:00.4", "0000:d8:00.5"],
>>         "isRdma": true
>> }
>>
>> The sriov device plugin starts without problems, the devices are correctly allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are bound to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
>>
>>
>> But when I run my application the Mellanox adapters are probed and I obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severi
ty":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id" >> :"6"},"message":"[pktio_libpio_init] No network ports could be >> enabled!"} >> >> Could you please help me with this issue? >> >> >> Thanks, >> >> Roc=EDo >> > > > > --_000_DM5PR1201MB2555530FB2C86D96DF5C9C29CD289DM5PR1201MB2555_ Content-Type: text/html; charset="iso-8859-1" Content-Transfer-Encoding: quoted-printable

Hello Rocio,

 

For Intel's NICs it would be better to take it with @Ferruh Yigit / @Qi Zhang.

For Nvidia's, let's continue together.

 

Regards,

Asaf Penso

 

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, February 3, 2022 12:30 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Hi Asaf,

 

We have replaced the Mellanox NICs with Intel NICs to try to avoid this problem, but it is not working either; this time we get the following error:

 

{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.377+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"8"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_INTEL_COM_INTEL_SRIOV_DPDK=0000:d8:02.1 found"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.378+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:02.1 --base-virtaddr=0x200000000 --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.384+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.386+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:37.387+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: VFIO support initialized"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.358+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL:   using IOMMU type 1 (Type 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Probe PCI driver: net_i40e_vf (8086:154c) device: 0000:d8:02.1 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Releasing pci mapped resource for 0000:d8:02.1"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40000000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.704+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Calling pci_unmap_resource for 0000:d8:02.1 at 0xa40010000"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Requested device 0000:d8:02.1 cannot be used"}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.828+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-02-02T14:43:38.891+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_libpio_init] No network ports could be enabled!"}

 

Since we are now using Intel NICs, I have created the VFs and bound them to the vfio-pci driver:
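For reference, the VF creation and binding steps can be sketched as follows (a sketch only, assuming the XXV710 PF is p8p1 at 0000:d8:00.0 as in the listings below and that the dpdk-devbind tool is on the PATH; VF counts and addresses may differ on your system):

```shell
# Create 4 VFs on the XXV710 PF (assumption: PF netdev is p8p1)
echo 4 > /sys/class/net/p8p1/device/sriov_numvfs

# Make sure the vfio-pci module is loaded before binding
modprobe vfio-pci

# Bind each VF to vfio-pci so DPDK can take it over
dpdk-devbind --bind=vfio-pci 0000:d8:02.0 0000:d8:02.1 0000:d8:02.2 0000:d8:02.3

# Verify the result
dpdk-devbind --status
```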

 

pcgwpod009-c04:~ # dpdk-devbind --status

Network devices using DPDK-compatible driver
============================================
0000:d8:02.0 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.1 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.2 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf
0000:d8:02.3 'Ethernet Virtual Function 700 Series 154c' drv=vfio-pci unused=iavf

Network devices using kernel driver
===================================
0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe unused=vfio-pci
0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe unused=vfio-pci
0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe unused=vfio-pci
0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe unused=vfio-pci
0000:3b:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p1 drv=i40e unused=vfio-pci
0000:3b:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p1p2 drv=i40e unused=vfio-pci
0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci
0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p2 drv=ixgbe unused=vfio-pci
0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_0 drv=ixgbevf unused=vfio-pci
0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed' if=p3p1_1 drv=ixgbevf unused=vfio-pci
0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed' if= drv=ixgbevf unused=vfio-pci
0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p1 drv=ixgbe unused=vfio-pci
0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p4p2 drv=ixgbe unused=vfio-pci
0000:d8:00.0 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p1 drv=i40e unused=vfio-pci
0000:d8:00.1 'Ethernet Controller XXV710 for 25GbE SFP28 158b' if=p8p2 drv=i40e unused=vfio-pci

 

The interfaces are up:

 

pcgwpod009-c04:~ # ip link show dev p8p1
290: p8p1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 40:a6:b7:0d:98:b0 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 1     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 2     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
    vf 3     link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff, spoof checking on, link-state auto, trust off
pcgwpod009-c04:~ #

 

The testpmd is working:

 

pcgwpod009-c04:~ # testpmd -l 8-15 -n 4 -w d8:02.0 -w d8:02.1 -w d8:02.2 -w d8:02.3 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:02.0 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL:   using IOMMU type 1 (Type 1)
EAL: PCI device 0000:d8:02.1 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.2 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
EAL: PCI device 0000:d8:02.3 on NUMA socket 1
EAL:   probe driver: 8086:154c net_i40e_vf
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: FE:72:DB:BE:05:EF
Configuring Port 1 (socket 1)
Port 1: 5E:C5:3E:86:1A:84
Configuring Port 2 (socket 1)
Port 2: 42:F0:5D:B0:1F:B3
Configuring Port 3 (socket 1)
Port 3: 46:00:42:2F:A2:DE
Checking link statuses...
Done
testpmd>

 

Any idea what could be causing the error this time?

 

Thanks,

 

Rocío

 

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 6:02 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

We'll need to check, but how do you want to proceed?

You either need 19.11 LTS or 20.11 LTS to work properly.

 

Regards,

Asaf Penso


From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 31, 2022 2:01:43 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Hi Asaf,

 

Yes, it seems that the DPDK 20.08 code is built in with the VNF I'm deploying, so it is always using this version, which apparently doesn't have the patch that fixes this error.

 

I think the patch is the following:

https://patches.dpdk.org/project/dpdk/patch/20200603150602.4686-7-ophirmu@mellanox.com/

 

and the code part that solves the error is:

+       if (mlx5_class_get(pci_dev->device.devargs) != MLX5_CLASS_NET) {
+               DRV_LOG(DEBUG, "Skip probing - should be probed by other mlx5"
+                       " driver.");
+               return 1;
+       }

Could you please confirm?

 

Thanks,

 

Rocío

 

From: Asaf Penso <asafp@nvidia.com>
Sent: Monday, January 31, 2022 12:49 PM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

I see two differences below.

First, in testpmd the version is 19.11.11, while in your application it's 20.08. See this print:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}

 

Second, in your application, I see the VFIO driver is not started properly:

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
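When that message appears inside a Kubernetes pod, a quick way to narrow down the usual cause is to verify that the VFIO device nodes are actually exposed in the container (a diagnostic sketch; error 2 / ENOENT typically means /dev/vfio/vfio is not present in the pod's namespace):

```shell
# The VFIO container device must exist inside the pod/container
ls -l /dev/vfio/vfio

# There should also be one group node per bound device, e.g. /dev/vfio/42
ls /dev/vfio

# The IOMMU must be enabled on the kernel command line for vfio-pci
grep -o 'intel_iommu=[a-z]*' /proc/cmdline
```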

 

Regards,

Asaf Penso

 

From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Thursday, January 20, 2022 9:49 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Hi Asaf,

 

I have manually compiled and installed DPDK 19.11.11.

 

Executing testpmd on the Mellanox NIC VFs where I want to run my app gives this result:

 

pcgwpod009-c04:~/dpdk-stable-19.11.11 # ./x86_64-native-linux-gcc/app/testpmd -l 8-15 -n 4 -w d8:00.2 -w d8:00.3 -w d8:00.4 -w d8:00.5 -- --rxq=2 --txq=2 -i
EAL: Detected 96 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:d8:00.2 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.3 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.4 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:d8:00.5 on NUMA socket 1
EAL:   probe driver: 15b3:1014 net_mlx5
Interactive-mode selected
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 1)
Port 0: 36:FE:F0:D2:90:27
Configuring Port 1 (socket 1)
Port 1: 72:AC:33:BF:0A:FA
Configuring Port 2 (socket 1)
Port 2: 1E:8D:81:60:43:E0
Configuring Port 3 (socket 1)
Port 3: C2:3C:EA:94:06:B4
Checking link statuses...
Done
testpmd>

 

But when I run my Data Plane app, the result is

 

{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.609+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[pktio_linux_packet_mmap_setup] block_size: 67108864, frame_size: 4096, block_nr: 1, frame_nr: 16384, mem_size: 67108864"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: pci devices added: 1, vhost user devices added: 0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"7"},"message":"[add_pio_pci_devices_from_env_to_config] pci device from PCIDEVICE_MELLANOX_COM_MLNX_SRIOV_NETDEVICE=0000:d8:00.5 found"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] CTRL: requesting 1024 MiB of hugepage memory for DPDK"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: DPDK version: DPDK 20.08.0"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.610+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: rte_eal_init() args: pio -m 1024 -n 4 --no-telemetry --file-prefix pio-0 --master-lcore=4 --lcores=4@(4) --pci-whitelist 0000:d8:00.5 --base-virtaddr=0x200000000 --iova-mode=va --legacy-mem --no-shconf "}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 96 lcore(s)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.618+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Detected 2 NUMA nodes"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.636+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Selected IOVA mode 'VA'"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: 2048 hugepages of size 2097152 reserved, but no mounted hugetlbfs found for that size"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probing VFIO support..."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL:   cannot open VFIO container, error 2 (No such file or directory)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:16.637+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: VFIO support could not be initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.567+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.5 (socket 1)"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.5 cannot be used"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.569+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] libpio packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] pktsock packet module is NOT initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] linux packet module is initialized"}
{"version":"0.2.0","timestamp":"2022-01-20T19:19:17.631+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_init_cpu] tap packet module is NOT initialized"}
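Since the application logs are structured JSON, it can help to strip them down to just the EAL/PMD messages when comparing runs. A minimal sketch with plain POSIX tools (assuming the "message" field name shown in the logs above; the sample line is a trimmed stand-in for a real log line):

```shell
# Extract the "message" field from a structured log line like the ones above
line='{"version":"0.2.0","severity":"info","message":"[pio] EAL: Bus (pci) probe failed."}'
msg=$(printf '%s\n' "$line" | sed -n 's/.*"message":"\([^"]*\)".*/\1/p')
echo "$msg"
```

This prints `[pio] EAL: Bus (pci) probe failed.`; piping the whole log file through the same sed expression gives one message per line.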

 

Any idea what could be the problem?

 

Thanks,

 

Rocío

 

 

From: Asaf Penso <asafp@nvidia.com>
Sent: Thursday, January 20, 2022 8:17 AM
To: Rocio Dominguez <rocio.dominguez@ericsson.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: Re: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Although inbox drivers come with a pre-installed DPDK, you can manually download, compile, install, and work with whatever version you wish.

 

Let us know the results, and we'll continue from there.

 

Regards,

Asaf Penso


From: Rocio Dominguez <rocio.dominguez@ericsson.com>
Sent: Monday, January 17, 2022 10:20:58 PM
To: Asaf Penso <asafp@nvidia.com>; NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>
Cc: users@dpdk.org <users@dpdk.org>; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

 

Hi Asaf,

Thanks for the prompt answer.

I have checked that the latest 19.11 LTS is 19.11.11, but in the openSUSE repositories the corresponding RPM package for SLES 15 SP2 is not available; the latest one is DPDK 19.11.10.

I have installed it, but the problem persists. It is probably solved in 19.11.11.

There is an RPM package in SLES 15 SP3 for DPDK 20.11.3, which is also an LTS; I am not sure if it could be a problem to install it on SLES 15 SP2. I will try it anyway.

I will also try to find another way to load 19.11.11 on SLES 15 SP2 apart from RPM or zypper; any suggestion is appreciated.
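One way to get 19.11.11 onto SLES 15 SP2 without RPM or zypper is to build it from the dpdk.org release tarball (a sketch using the legacy make build system that 19.11 still supports; build-dependency package names are distribution-specific and not shown):

```shell
# Fetch and unpack the 19.11.11 LTS release
wget https://fast.dpdk.org/rel/dpdk-19.11.11.tar.xz
tar xf dpdk-19.11.11.tar.xz
cd dpdk-stable-19.11.11

# Legacy make-based build (still available in the 19.11 series)
make install T=x86_64-native-linux-gcc -j "$(nproc)"

# testpmd and the libraries end up under x86_64-native-linux-gcc/
ls x86_64-native-linux-gcc/app/testpmd
```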

Thanks,

Rocío

-----Original Message-----
From: Asaf Penso <asafp@nvidia.com>
Sent: Sunday, January 16, 2022 4:31 PM
To: NBU-Contact-Thomas Monjalon (EXTERNAL) <thomas@monjalon.net>; Rocio Dominguez <rocio.dominguez@ericsson.com>
Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
Subject: RE: net_mlx5: unable to recognize master/representors on the multiple IB devices

Hello Rocio,
IIRC, there was a fix in a recent stable version.
Would you please try taking the latest 19.11 LTS and tell whether you still see the issue?

Regards,
Asaf Penso

>-----Original Message-----
>From: Thomas Monjalon <thomas@monjalon.net>
>Sent: Sunday, January 16, 2022 3:24 PM
>To: Rocio Dominguez <rocio.dominguez@ericsson.com>
>Cc: users@dpdk.org; Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
><viacheslavo@nvidia.com>; Raslan Darawsheh <rasland@nvidia.com>
>Subject: Re: net_mlx5: unable to recognize master/representors on the
>multiple IB devices
>
>+Cc mlx5 experts
>
>
>14/01/2022 11:10, Rocio Dominguez:
>> Hi,
>>
>> I'm doing a setup with Mellanox ConnectX-4 (MCX416A-CCA) NICs.
>>
>> I'm using:
>>
>> OS SLES 15 SP2
>> DPDK 19.11.4 (the official supported version for SLES 15 SP2)
>> MLNX_OFED_LINUX-5.5-1.0.3.2-sles15sp2-x86_64 (the latest one)
>> Mellanox adapters firmware 12.28.2006 (corresponding to this
>> MLNX_OFED version) kernel 5.3.18-24.34-default
>>
>>
>> This is my SRIOV configuration for DPDK capable PCI slots:
>>
>>             {
>>                 "resourceName": "mlnx_sriov_netdevice",
>>                 "resourcePrefix": "mellanox.com",
>>                 "isRdma": true,
>>                 "selectors": {
>>                     "vendors": ["15b3"],
>>                     "devices": ["1014"],
>>                     "drivers": ["mlx5_core"],
>>                     "pciAddresses": ["0000:d8:00.2", "0000:d8:00.3",
>> "0000:d8:00.4",
>"0000:d8:00.5"],
>>                     "isRdma": true
>>             }
>>
>> The sriov device plugin starts without problems, the devices are
>> correctly
>allocated:
>>
>> {
>>   "cpu": "92",
>>   "ephemeral-storage": "419533922385",
>>   "hugepages-1Gi": "8Gi",
>>   "hugepages-2Mi": "4Gi",
>>   "intel.com/intel_sriov_dpdk": "0",
>>   "intel.com/sriov_cre": "3",
>>   "mellanox.com/mlnx_sriov_netdevice": "4",
>>   "mellanox.com/sriov_dp": "0",
>>   "memory": "183870336Ki",
>>   "pods": "110"
>> }
>>
>> The Mellanox NICs are bound to the kernel driver mlx5_core:
>>
>> pcgwpod009-c04:~ # dpdk-devbind --status
>>
>> Network devices using kernel driver
>> ===================================
>> 0000:18:00.0 'Ethernet Controller 10G X550T 1563' if=em1 drv=ixgbe
>> unused=vfio-pci
>> 0000:18:00.1 'Ethernet Controller 10G X550T 1563' if=em2 drv=ixgbe
>> unused=vfio-pci
>> 0000:19:00.0 'Ethernet Controller 10G X550T 1563' if=em3 drv=ixgbe
>> unused=vfio-pci
>> 0000:19:00.1 'Ethernet Controller 10G X550T 1563' if=em4 drv=ixgbe
>> unused=vfio-pci
>> 0000:3b:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f0
>> drv=mlx5_core unused=vfio-pci
>> 0000:3b:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp59s0f1
>> drv=mlx5_core unused=vfio-pci
>> 0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p3p1 drv=ixgbe unused=vfio-pci
>> 0000:5e:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p3p2 drv=ixgbe unused=vfio-pci
>> 0000:5e:10.0 '82599 Ethernet Controller Virtual Function 10ed' if=
>> drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.2 '82599 Ethernet Controller Virtual Function 10ed'
>> if=p3p1_1 drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.4 '82599 Ethernet Controller Virtual Function 10ed' if=
>> drv=ixgbevf unused=vfio-pci
>> 0000:5e:10.6 '82599 Ethernet Controller Virtual Function 10ed'
>> if=p3p1_3 drv=ixgbevf unused=vfio-pci
>> 0000:af:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p4p1 drv=ixgbe unused=vfio-pci
>> 0000:af:00.1 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb'
>> if=p4p2 drv=ixgbe unused=vfio-pci
>> 0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0
>> drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.1 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f1
>> drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.2 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f2 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.3 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f3 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.4 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f4 drv=mlx5_core unused=vfio-pci
>> 0000:d8:00.5 'MT27700 Family [ConnectX-4 Virtual Function] 1014'
>> if=enp216s0f5 drv=mlx5_core unused=vfio-pci
>>
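[AP] As a side note, a throwaway sketch for parsing `dpdk-devbind --status` lines like the ones above, to confirm programmatically which PCI devices sit on mlx5_core (the regex and sample lines are mine, taken from the listing; this is illustrative, not part of any DPDK tool):

```python
import re

# Matches one dpdk-devbind status line: "<bdf> '<desc>' if=<name> drv=<drv> ..."
LINE = re.compile(r"^(?P<bdf>[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.\d) "
                  r"'(?P<desc>[^']*)' if=(?P<ifname>\S*) drv=(?P<drv>\S+)")

sample = [
    "0000:d8:00.0 'MT27700 Family [ConnectX-4] 1013' if=enp216s0f0 drv=mlx5_core unused=vfio-pci",
    "0000:5e:00.0 '82599ES 10-Gigabit SFI/SFP+ Network Connection 10fb' if=p3p1 drv=ixgbe unused=vfio-pci",
]

# Keep only the PCI addresses bound to mlx5_core.
mlx5 = [m.group("bdf") for l in sample
        if (m := LINE.match(l)) and m.group("drv") == "mlx5_core"]
print(mlx5)  # prints ['0000:d8:00.0']
```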
>> The interfaces are up:
>>
>> pcgwpod009-c04:~ # ibdev2netdev -v
>> 0000:3b:00.0 mlx5_0 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f0 (Up)
>> 0000:3b:00.1 mlx5_1 (MT4115 - MT1646K01301) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp59s0f1 (Up)
>> 0000:d8:00.0 mlx5_2 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f0 (Up)
>> 0000:d8:00.1 mlx5_3 (MT4115 - MT1646K00538) CX416A - ConnectX-4 QSFP28 fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f1 (Up)
>> 0000:d8:00.2 mlx5_4 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f2 (Up)
>> 0000:d8:00.3 mlx5_5 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f3 (Up)
>> 0000:d8:00.4 mlx5_6 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f4 (Up)
>> 0000:d8:00.5 mlx5_7 (MT4116 - NA) fw 12.28.2006 port 1 (ACTIVE) ==> enp216s0f5 (Up)
>> pcgwpod009-c04:~ #
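[AP] For context on the error message itself: the kernel names each port's `/sys/class/net/<if>/phys_port_name`, and the PMD uses that naming to tell the master (uplink) port from VF representors. Below is a rough, illustrative approximation of that classification in Python; the function name and exact patterns are my own sketch, not the PMD's actual code, so treat it only as a mental model of what "unable to recognize" means:

```python
import re

def classify_port(phys_port_name: str) -> str:
    """Rough sketch: classify a netdev by its phys_port_name string."""
    if re.fullmatch(r"p\d+", phys_port_name):        # e.g. "p0" -> physical port
        return "master (uplink)"
    if re.fullmatch(r"pf\d+vf\d+", phys_port_name):  # e.g. "pf0vf2" -> representor
        return "VF representor"
    if re.fullmatch(r"\d+", phys_port_name):         # older kernels: bare "2"
        return "VF representor (legacy naming)"
    # A name matching none of the known conventions is the kind of situation
    # that surfaces as "unable to recognize master/representors".
    return "unrecognized"

print(classify_port("p0"))      # prints master (uplink)
print(classify_port("pf0vf2"))  # prints VF representor
```

Checking what `cat /sys/class/net/enp216s0f2/phys_port_name` actually returns on the node (and whether it exists at all on this kernel/OFED combination) may help narrow down why the probe fails.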
>>
>>
>> But when I run my application, the Mellanox adapters are probed and I
>> obtain the following error:
>>
>> {"proc_id":"6"},"message":"[pio] EAL: Probe PCI driver: mlx5_pci (15b3:1014) device: 0000:d8:00.4 (socket 1)"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] net_mlx5: unable to recognize master/representors on the multiple IB devices"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] common_mlx5: Failed to load driver = net_mlx5."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Requested device 0000:d8:00.4 cannot be used"}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] EAL: Bus (pci) probe failed."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"info","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pio] USER1: ports init fail in DPDK, expect 1 ports, actual 0 ports."}
>> {"version":"0.2.0","timestamp":"2022-01-14T09:51:39.890+00:00","severity":"error","service_id":"eric-pc-up-data-plane","metadata":{"proc_id":"6"},"message":"[pktio_libpio_init] No network ports could be enabled!"}
>>
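[AP] Since the JSON-wrapped log lines above become hard to read once the mail client rewraps them, a small sketch for pulling out just the plain messages (the sample line is copied and un-wrapped from the excerpt above; the field names come from those lines):

```python
import json

# One log line, un-wrapped, from the excerpt above.
line = ('{"version":"0.2.0","timestamp":"2022-01-14T09:51:39.826+00:00",'
        '"severity":"info","service_id":"eric-pc-up-data-plane",'
        '"metadata":{"proc_id":"6"},'
        '"message":"[pio] net_mlx5: unable to recognize '
        'master/representors on the multiple IB devices"}')

record = json.loads(line)
# Print only the parts a human needs when scanning the probe failure.
print(record["severity"], record["message"])
```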
>> Could you please help me with this issue?
>>
>>
>> Thanks,
>>
>> Rocío
>>