From: CJ Sculti
Date: Wed, 13 Nov 2024 15:06:17 -0500
Subject: DPDK With Mellanox ConnectX-5
To: users@dpdk.org

I've been running my application for years on igb_uio with Intel NICs. I recently replaced them with a dual-port 40 Gbps Mellanox ConnectX-5 NIC, updated the DPDK version my application uses, and compiled it with support for the mlx5 PMD. Both 40 Gbps ports are up with link, and both are in Ethernet mode, not InfiniBand mode. However, when I start my application, it complains about failing to load 'mlx5_eth'. Both ports are bound to the mlx5_core driver at the moment. When I bind them to vfio-pci or uio_pci_generic instead, my application fails to recognize them as valid DPDK devices at all.
Anyone have any ideas? Also, it's strange that it only complains about one of the ports. I have them configured in a kernel bond, as my application requires that.

Network devices using kernel driver
===================================
0000:2b:00.0 'MT27800 Family [ConnectX-5] 1017' if=enp43s0f0np0 drv=mlx5_core unused=vfio-pci
0000:2b:00.1 'MT27800 Family [ConnectX-5] 1017' if=enp43s0f1np1 drv=mlx5_core unused=vfio-pci

root@DDoSMitigation:~/anubis/engine/bin# ./anubis-engine

  Electric Fence 2.2 Copyright (C) 1987-1999 Bruce Perens
EAL: Detected CPU lcores: 12
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:2b:00.0 (socket -1)
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:2b:00.1 (socket -1)
mlx5_net: PF 0 doesn't have Verbs device matches PCI device 0000:2b:00.1, are kernel drivers loaded?
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:2b:00.1 cannot be used
TELEMETRY: No legacy callbacks, legacy socket not created
USER1: Anubis build master/.
USER1: We will run on 12 logical cores.
USER1: Enabled lcores not a power of 2! This could have performance issues.
KNI: WARNING: KNI is deprecated and will be removed in DPDK 23.11
USER1: Failed to reset link fe0.
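In case it's useful for reproducing this, below is a minimal standalone probe sketch using the standard DPDK EAL API. It is not my application's code; the '-a' allow-list arguments just reuse the PCI addresses from the devbind output above, and both ports are left bound to mlx5_core (my understanding is that the mlx5 PMD is bifurcated and probes via the kernel driver rather than vfio-pci/uio).

/* Sketch: initialize EAL with only the two ConnectX-5 ports
 * allow-listed, then report how many ethdev ports were probed.
 * Build against DPDK as usual (e.g. via pkg-config libdpdk). */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_ethdev.h>

int main(int argc, char **argv)
{
    (void)argc;

    /* Illustrative EAL arguments; a real application would
     * normally take these from its own command line. */
    char *eal_argv[] = {
        argv[0],
        "-a", "0000:2b:00.0",
        "-a", "0000:2b:00.1",
    };
    int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

    if (rte_eal_init(eal_argc, eal_argv) < 0) {
        fprintf(stderr, "rte_eal_init failed\n");
        return 1;
    }

    /* With a working mlx5 setup, both ports should show up here. */
    printf("%u ethdev port(s) probed\n", rte_eth_dev_count_avail());

    rte_eal_cleanup();
    return 0;
}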