From: Thomas Monjalon
To: CJ Sculti
Cc: users@dpdk.org, Dariusz Sosnowski
Subject: Re: DPDK with Mellanox ConnectX-5, complaining about mlx5_eth?
Date: Wed, 13 Nov 2024 22:26:38 +0100
Message-ID: <9487809.CDJkKcVGEf@thomas>

13/11/2024 21:10, CJ Sculti:
> I've been running my application for years on igb_uio with Intel NICs. I
> recently replaced them with a Mellanox ConnectX-5 2x 40 Gbps NIC, updated
> the DPDK version my application uses, and compiled with support for the
> mlx5 PMD. Both 40 Gbps ports are up with link, and both are in Ethernet
> mode, not InfiniBand mode. However, when I start my application I get
> complaints about trying to load 'mlx5_eth'. Both ports are bound to the
> mlx5_core driver at the moment. When I bind them to vfio-pci or
> uio_pci_generic, my application fails to recognize them as valid DPDK
> devices at all. Does anyone have any ideas? Also, it is strange that it
> only complains about one port. I have them configured in a kernel bond,
> as my application requires that.

You must not bind mlx5 devices to VFIO. The mlx5 PMD is a bifurcated
driver: the ports must stay bound to the kernel mlx5_core driver, and DPDK
attaches on top of it, which is why the ports stop being recognized as DPDK
devices once you rebind them to vfio-pci or uio_pci_generic.

I recommend reading the documentation. You can start here:
https://doc.dpdk.org/guides/linux_gsg/linux_drivers.html#bifurcated-driver
then
https://doc.dpdk.org/guides/platform/mlx5.html#design
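As a minimal sketch of the expected workflow (the PCI addresses and core
list below are examples only, adjust them to your system):

    # Ports stay on the kernel driver: dpdk-devbind.py --status should
    # keep listing them under mlx5_core, never vfio-pci or uio_pci_generic.
    dpdk-devbind.py --status

    # Pass the PCI addresses to the application as usual, e.g. with testpmd:
    dpdk-testpmd -l 0-3 -n 4 -a 0000:3b:00.0 -a 0000:3b:00.1 -- -i

With the bifurcated model, the kernel netdevices (including your kernel
bond) remain visible while the DPDK application is running.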