From: Thomas Monjalon
To: Aaron Lee
Cc: users@dpdk.org, asafp@nvidia.com
Subject: Re: ConnectX5 Setup with DPDK
Date: Tue, 22 Feb 2022 08:10:12 +0100
Message-ID: <1848868.MyG8hOvIyE@thomas>

21/02/2022 21:10, Aaron Lee:
> Hi Thomas,
>
> Actually I remembered in my previous setup I had run dpdk-devbind.py to
> bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> this and just wanted to confirm that this is correct.

Indeed, the mlx5 PMD runs on top of the mlx5 kernel driver,
so we don't need UIO or VFIO drivers.
The kernel modules must remain loaded and can be used at the same time.
When a DPDK application is running, the traffic goes to the userspace PMD
by default, but it is possible to configure some flows to go directly
to the kernel driver. This behaviour is called the "bifurcated model"
(see the testpmd sketch further below).

> On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee wrote:
> > Hi Thomas,
> >
> > I tried installing things from scratch two days ago and have gotten
> > things working! I think part of the problem was figuring out the correct
> > hugepage allocation for my system. If I recall correctly, I tried setting
> > up my system with default page size 1G but perhaps didn't have enough pages
> > allocated at the time. Currently I have the following, which gives me the
> > output you've shown previously.
> >
> > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > Node Pages Size Total
> > 0    16    1Gb  16Gb
> > 1    16    1Gb  16Gb
> >
> > root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: No free 2048 kB hugepages reported on node 0
> > EAL: No free 2048 kB hugepages reported on node 1
> > EAL: No available 2048 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > TELEMETRY: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool : n=779456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> >
> > Configuring Port 0 (socket 1)
> > Port 0: EC:0D:9A:68:21:A8
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address       Name         Driver   Status Link
> > 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up     100 Gbps
> >
> > Best,
> > Aaron
> >
> > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon wrote:
> >
> >> 21/02/2022 19:52, Thomas Monjalon:
> >> > 18/02/2022 22:12, Aaron Lee:
> >> > > Hello,
> >> > >
> >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> >> > > wondering if the card I have simply isn't compatible. I first noticed
> >> > > that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> >> > > error logs when running dpdk-pdump.
> >> >
> >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
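To illustrate the bifurcated model with testpmd: in flow isolated mode,
only the flows you explicitly create are delivered to the DPDK application,
and all other traffic keeps going to the mlx5 kernel netdev.
A rough sketch, untested here (the port id and the UDP match are
arbitrary examples, not something from your setup):

  testpmd> port stop 0
  testpmd> flow isolate 0 1
  testpmd> port start 0
  testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions queue index 0 / end

With these rules, only UDP packets to port 4789 reach testpmd queue 0;
everything else is still handled by the kernel interface.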
> >> >
> >> > > EAL: Detected CPU lcores: 80
> >> > > EAL: Detected NUMA nodes: 2
> >> > > EAL: Detected static linkage of DPDK
> >> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> >> > > vdev_scan(): Failed to request vdev from primary
> >> > > EAL: Selected IOVA mode 'PA'
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> >> > > EAL: Cannot request default VFIO container fd
> >> > > EAL: VFIO support could not be initialized
> >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> >> > > mlx5_common: port 0 request to primary process failed
> >> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
> >> > > mlx5_common: Failed to load driver mlx5_eth
> >> > > EAL: Requested device 0000:af:00.0 cannot be used
> >> > > EAL: Error - exiting with code: 1
> >> > > Cause: No Ethernet ports - bye
> >> >
> >> > From this log, we miss the previous steps before running the application.
> >> >
> >> > Please check these simple steps (put together as a shell sketch further below):
> >> > - install rdma-core
> >> > - build dpdk (meson build && ninja -C build)
> >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
> >> >
> >> > EAL: Detected CPU lcores: 10
> >> > EAL: Detected NUMA nodes: 1
> >> > EAL: Detected static linkage of DPDK
> >> > EAL: Selected IOVA mode 'PA'
> >> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
> >> > Interactive-mode selected
> >> > testpmd: create a new mbuf pool : n=219456, size=2176, socket=0
> >> > testpmd: preferred mempool ops selected: ring_mp_mc
> >> > Configuring Port 0 (socket 0)
> >> > Port 0: 0C:42:A1:D6:E0:00
> >> > Checking link statuses...
> >> > Done
> >> > testpmd> show port summary all
> >> > Number of available ports: 1
> >> > Port MAC Address       Name    Driver   Status Link
> >> > 0    0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up     25 Gbps
> >> >
> >> > > I noticed that the pci id of the card I was given is 15b3:1017 as below.
> >> > > This sort of indicates to me that the PMD driver isn't supported on this
> >> > > card.
> >> >
> >> > This card is well supported and even officially tested with DPDK 21.11,
> >> > as you can see in the release notes:
> >> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> >> >
> >> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> >> > > [ConnectX-5] [15b3:1017]
> >> > >
> >> > > I'd appreciate it if someone has gotten this card to work with DPDK to
> >> > > point me in the right direction or if my suspicions were correct that this
> >> > > card doesn't work with the PMD.
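The simple steps quoted above, put together as one shell session for
convenience (the rdma-core install line assumes a Debian/Ubuntu system;
adapt it to your distribution):

  sudo apt-get install -y rdma-core        # libibverbs and the mlx5 provider used by the PMD
  meson build && ninja -C build            # build DPDK
  sudo usertools/dpdk-hugepages.py -r 1G   # reserve hugepages
  echo "show port summary all" | sudo build/app/dpdk-testpmd --in-memory -- -i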
> >>
> >> If you want to check which hardware is supported by a PMD,
> >> you can use this command:
> >>
> >> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> >> PMD NAME: mlx5_eth
> >> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> >> PMD HW SUPPORT:
> >>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
> >>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
> >>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
> >>
> >> > Please tell me what drove you into the wrong direction,
> >> > because I really would like to improve the documentation & tools.
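As a quick cross-check of your card against that list, you can match the
PCI device id reported by lspci with the pmdinfo output, for example
(a sketch; the grep pattern is just the device id from your lspci line):

  lspci -nn -d 15b3:      # list Mellanox devices with their [vendor:device] ids
  usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so | grep -i 1017

Your af:00.0 port is [15b3:1017], i.e. the "MT27800 Family [ConnectX-5]"
entry above, so it is covered by the mlx5 PMD.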