From: fwefew 4t4tg <7532yahoo@gmail.com>
Date: Thu, 7 Apr 2022 20:07:42 -0400
Subject: Mellanox Technologies MT27710 Family [ConnectX-4 Lx] not receiving (transmitting) packets
To: users@dpdk.org
List-Id: DPDK usage discussions

I have two identical metal boxes running Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz. There is no virtualization, so I did NOT enable the IOMMU. Both boxes are equipped with two Mellanox Technologies MT27710 NICs and two Intel NICs. The Intel NICs are out of scope. I am using the NIC at 01:00.1 for DPDK:

# lspci | grep Eth
01:00.0 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
01:00.1 Ethernet controller: Mellanox Technologies MT27710 Family [ConnectX-4 Lx]
03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)
04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03)

The ibv and mst utilities see both Mellanox NICs. DPDK's testpmd application sees the NIC and reports sensible values, except perhaps ibv_devinfo, which reports InfiniBand transport.

My application sends UDP packets from one machine to another. I know this code works on AWS ENA NICs. The build for Mellanox finds the NIC just like DPDK's testpmd does, reporting the same EAL info. There are no errors transmitting packets; all numbers in the stats are exactly what they should be. However, the receiving side never sees any packets. It reports no errors and does not see any packets.
Every call to rx_burst sees 0 packets. There is no firewall issue: I can use ncat in TCP and UDP mode to send files between the machines just fine.

My application is using CRC/checksum offload for RX and TX (value 6). I triple-checked the MAC and IP addresses I use in the code; I believe they are fine. I set auto-negotiation on both the RX and TX sides, and I have also tried forcing a 10 Gb/s link speed. No help. I also ran testpmd and tried to send it packets with ncat; testpmd never sees any packets either.

I have read through https://doc.dpdk.org/guides/nics/mlx5.html for proper setup. I found the following deviations:

- https://doc.dpdk.org/guides/platform/mlx5.html#mlx5-common-env says to set the link type to Eth: mlxconfig -d <mst device> query | grep LINK_TYPE. These devices do NOT have a link type and do not allow setting it. I am assuming they only work in Eth mode.

- The latest and greatest MFT toolkit (wget https://www.mellanox.com/downloads/MFT/mft-4.18.0-106-x86_64-deb.tgz) does not include the utility mlxdevm, and neither does the OFED install (wget https://www.mellanox.com/downloads/ofed/MLNX_EN-5.5-1.0.3.2/mlnx-en-5.5-1.0.3.2-ubuntu20.04-x86_64.iso). The ISO file does NOT have a utility called mlnxofedinstall, so I am not sure if I am missing something. The doc reads:

  The firmware, the libraries libibverbs, libmlx5, and mlnx-ofed-kernel modules are packaged in Mellanox OFED. After downloading, it can be installed with this command:

  ./mlnxofedinstall --dpdk

  So I just ran ./install --dpdk, since the ISO image does have an install script that takes the --dpdk argument.

- The doc https://doc.dpdk.org/guides/platform/mlx5.html#mlx5-common-env mentions a whole bunch of instructions about SR-IOV, SF ports, etc., but these require mlxdevm. As I say above, there is no such utility.

So I am stuck.
DETAILS:

uname -a
Linux client 5.13.0-28-generic #31~20.04.1-Ubuntu SMP Wed Jan 19 14:08:10 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

root@client:~/Dev/reinvent/scripts# ibv_devices
    device                 node GUID
    ------              ----------------
    mlx5_0              0c42a1030065fd82
    mlx5_1              0c42a1030065fd83

# ibv_devinfo
hca_id: mlx5_0
        transport:         InfiniBand (0)
        fw_ver:            14.32.1010
        node_guid:         0c42:a103:0065:fd82
        sys_image_guid:    0c42:a103:0065:fd82
        vendor_id:         0x02c9
        vendor_part_id:    4117
        hw_ver:            0x0
        board_id:          MT_2420110034
        phys_port_cnt:     1
        port: 1
                state:          PORT_ACTIVE (4)
                max_mtu:        4096 (5)
                active_mtu:     1024 (3)
                sm_lid:         0
                port_lid:       0
                port_lmc:       0x00
                link_layer:     Ethernet

hca_id: mlx5_1
        transport:         InfiniBand (0)
        fw_ver:            14.32.1010
        node_guid:         0c42:a103:0065:fd83
        sys_image_guid:    0c42:a103:0065:fd82
        vendor_id:         0x02c9
        vendor_part_id:    4117
        hw_ver:            0x0
        board_id:          MT_2420110034
        phys_port_cnt:     1
        port: 1
                state:          PORT_ACTIVE (4)
                max_mtu:        4096 (5)
                active_mtu:     1024 (3)
                sm_lid:         0
                port_lid:       0
                port_lmc:       0x00
                link_layer:     Ethernet

# mst status -v
MST modules:
------------
    MST PCI module is not loaded
    MST PCI configuration module loaded
PCI devices:
------------
DEVICE_TYPE             MST                          PCI       RDMA     NET            NUMA
ConnectX4LX(rev:0)      /dev/mst/mt4117_pciconf0.1   01:00.1   mlx5_1   net-enp1s0f1   -1
ConnectX4LX(rev:0)      /dev/mst/mt4117_pciconf0     01:00.0   mlx5_0   net-bond0      -1

DPDK's testpmd application sees and likes the 01:00.1:

/root/Dev/dpdk/build/app/dpdk-testpmd --proc-type primary --in-memory --log-level 7 -n 4 --allow 01:00.1,class=eth -- -i
EAL: Detected CPU lcores: 16
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:01:00.1 (socket 0)
mlx5_net: No available register for sampler.
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=267456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:65:FD:83
Checking link statuses...
Done
testpmd> show port info 0

********************* Infos for port 0  *********************
MAC address: 0C:42:A1:65:FD:83
Device name: 01:00.1
Driver name: mlx5_pci
Firmware-version: 14.32.1010
Devargs: class=eth
Connect to socket: 0
memory allocation on the socket: 0
Link status: up
Link speed: 10 Gbps
Link duplex: full-duplex
Autoneg status: On
MTU: 1500
Promiscuous mode: enabled
Allmulticast mode: disabled
Maximum number of MAC addresses: 128
Maximum number of MAC addresses of hash filtering: 0
VLAN offload:
  strip off, filter off, extend off, qinq strip off
Hash key size in bytes: 40
Redirection table size: 1
Supported RSS offload flow types:
  ipv4  ipv4-frag  ipv4-tcp  ipv4-udp  ipv4-other
  ipv6  ipv6-frag  ipv6-tcp  ipv6-udp  ipv6-other
  ipv6-ex  ipv6-tcp-ex  ipv6-udp-ex
  user defined 60  user defined 61  user defined 62  user defined 63
Minimum size of RX buffer: 32
Maximum configurable length of RX packet: 65536
Maximum configurable size of LRO aggregated packet: 65280
Current number of RX queues: 1
Max possible RX queues: 1024
Max possible number of RXDs per queue: 65535
Min possible number of RXDs per queue: 0
RXDs number alignment: 1
Current number of TX queues: 1
Max possible TX queues: 1024
Max possible number of TXDs per queue: 65535
Min possible number of TXDs per queue: 0
TXDs number alignment: 1
Max segment number per packet: 40
Max segment number per MTU/TSO: 40
Device capabilities: 0x14( RXQ_SHARE FLOW_SHARED_OBJECT_KEEP )
Switch name: 01:00.1
Switch domain Id: 0
Switch Port Id: 65535
Switch Rx domain: 0
testpmd>

# lsmod | egrep "(mlx|ib)" | sort
ib_cm                  53248  2 rdma_cm,ib_ipoib
ib_core               368640  8 rdma_cm,ib_ipoib,iw_cm,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
ib_ipoib              135168  0
ib_umad                24576  0
ib_uverbs             139264  2 rdma_ucm,mlx5_ib
libahci                36864  1 ahci
libcrc32c              16384  2 btrfs,raid456
mlx5_core            1634304  1 mlx5_ib
mlx5_ib               397312  0
mlx_compat             69632  11 rdma_cm,ib_ipoib,mlxdevm,iw_cm,ib_umad,ib_core,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,mlx5_core
mlxdevm               172032  1 mlx5_core
mlxfw                  32768  1 mlx5_core
pci_hyperv_intf        16384  1 mlx5_core
psample                20480  1 mlx5_core
tls                    94208  2 bonding,mlx5_core
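To isolate the receive path from my application code, testpmd can be put into receive-only forwarding while ncat (or the TX application) sends from the other box; a sketch of the interactive session using the stock testpmd CLI (if RX worked, RX-packets in the stats should move):

```
testpmd> set fwd rxonly
testpmd> start
testpmd> show port stats 0
testpmd> stop
```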