From: Aaron Lee
Date: Mon, 21 Feb 2022 11:45:29 -0800
Subject: Re: ConnectX5 Setup with DPDK
To: Thomas Monjalon
Cc: users@dpdk.org, asafp@nvidia.com

Hi Thomas,

I tried installing things from scratch two days ago and have gotten things working! I think part of the problem was figuring out the correct hugepage allocation for my system. If I recall correctly, I tried setting up my system with a default page size of 1G but perhaps didn't have enough pages allocated at the time. I currently have the following, which gives me the output you've shown previously.
root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
Node Pages Size Total
0    16    1Gb  16Gb
1    16    1Gb  16Gb

root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.

Configuring Port 0 (socket 1)
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address       Name         Driver     Status   Link
0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci   up       100 Gbps

Best,
Aaron

On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net> wrote:
> 21/02/2022 19:52, Thomas Monjalon:
> > 18/02/2022 22:12, Aaron Lee:
> > > Hello,
> > >
> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > > wondering if the card I have simply isn't compatible. I first noticed that
> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> > > logs when running dpdk-pdump.
> >
> > When testing a NIC, it is more convenient to use dpdk-testpmd.
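As a sanity check on the hugepage status above: each node's Total is just Pages × Size, so 16 pages × 1Gb = 16Gb per NUMA node. A small Python sketch over the printed table (a toy parser for illustration only; `dpdk-hugepages.py` itself reads the kernel's sysfs counters, not this text):

```python
def hugepage_totals(status_lines):
    """Recompute each node's hugepage total (in Gb) from rows shaped like
    the `dpdk-hugepages.py -s` output: 'Node Pages Size Total'."""
    totals = {}
    for line in status_lines:
        node, pages, size, _reported = line.split()
        # Only 1Gb pages appear in the output above; 2 MB pages would
        # need a unit conversion here.
        assert size.endswith("Gb")
        totals[int(node)] = int(pages) * int(size[:-2])
    return totals

rows = ["0 16 1Gb 16Gb", "1 16 1Gb 16Gb"]
print(hugepage_totals(rows))  # -> {0: 16, 1: 16}
```

Both sockets end up with 16Gb reserved, which is why testpmd can create an mbuf pool on each NUMA node.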
> >
> > > EAL: Detected CPU lcores: 80
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Detected static linkage of DPDK
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > > vdev_scan(): Failed to request vdev from primary
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > > EAL: Cannot request default VFIO container fd
> > > EAL: VFIO support could not be initialized
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > > mlx5_common: port 0 request to primary process failed
> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an
> > > error: No such file or directory
> > > mlx5_common: Failed to load driver mlx5_eth
> > > EAL: Requested device 0000:af:00.0 cannot be used
> > > EAL: Error - exiting with code: 1
> > >   Cause: No Ethernet ports - bye
> >
> > From this log, we miss the previous steps before running the application.
> >
> > Please check these simple steps:
> > - install rdma-core
> > - build dpdk (meson build && ninja -C build)
> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
> >
> > EAL: Detected CPU lcores: 10
> > EAL: Detected NUMA nodes: 1
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
> > Interactive-mode selected
> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Configuring Port 0 (socket 0)
> > Port 0: 0C:42:A1:D6:E0:00
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address       Name     Driver     Status   Link
> > 0    0C:42:A1:D6:E0:00 08:00.0  mlx5_pci   up       25 Gbps
> >
> > > I noticed that the pci id of the card I was given is 15b3:1017 as below.
> > > This sort of indicates to me that the PMD driver isn't supported on this
> > > card.
> >
> > This card is well supported and even officially tested with DPDK 21.11,
> > as you can see in the release notes:
> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> >
> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> > > [ConnectX-5] [15b3:1017]
> > >
> > > I'd appreciate it if someone has gotten this card to work with DPDK to
> > > point me in the right direction or if my suspicions were correct that this
> > > card doesn't work with the PMD.
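The last of the four steps above boils down to one command line with a fixed shape: EAL options before the `--` separator, testpmd options after it. A hedged Python sketch of how that invocation is assembled (the `build` directory default mirrors the `meson build` step; the helper name is made up for illustration):

```python
import shlex

def testpmd_cmdline(build_dir="build", eal_args=("--in-memory",), app_args=("-i",)):
    """Assemble the testpmd invocation from the steps above.
    Arguments before `--` go to the EAL (e.g. --in-memory avoids shared
    runtime files); arguments after `--` go to testpmd itself
    (e.g. -i for interactive mode)."""
    return [f"{build_dir}/app/dpdk-testpmd", *eal_args, "--", *app_args]

print(shlex.join(testpmd_cmdline()))  # -> build/app/dpdk-testpmd --in-memory -- -i
```

The `--in-memory` flag matters for the dpdk-pdump failure earlier in the thread: it keeps the process self-contained instead of participating in DPDK's primary/secondary multi-process setup.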
>
> If you want to check which hardware is supported by a PMD,
> you can use this command:
>
> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> PMD NAME: mlx5_eth
> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> PMD HW SUPPORT:
>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
>
> > Please tell me what drove you into the wrong direction,
> > because I really would like to improve the documentation & tools.
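Relating the `lspci -nn` output from earlier in the thread to the support table above is a simple vendor:device lookup. A sketch with a hand-copied subset of that table (the dict below is transcribed here for illustration, not read from the PMD binary the way `dpdk-pmdinfo.py` does it):

```python
# Subset of the mlx5 PMD hardware-support list printed above
# (vendor 15b3 = Mellanox Technologies).
MLX5_DEVICES = {
    "1013": "MT27700 Family [ConnectX-4]",
    "1015": "MT27710 Family [ConnectX-4 Lx]",
    "1017": "MT27800 Family [ConnectX-5]",
    "1019": "MT28800 Family [ConnectX-5 Ex]",
    "101b": "MT28908 Family [ConnectX-6]",
    "101d": "MT2892 Family [ConnectX-6 Dx]",
    "101f": "MT2894 Family [ConnectX-6 Lx]",
    "1021": "MT2910 Family [ConnectX-7]",
}

def is_supported(pci_id):
    """pci_id in lspci's 'vendor:device' form, e.g. '15b3:1017'."""
    vendor, device = pci_id.lower().split(":")
    return vendor == "15b3" and device in MLX5_DEVICES

print(is_supported("15b3:1017"))  # -> True: the ConnectX-5 in question
```

Aaron's card (15b3:1017) lands squarely in the supported list, confirming Thomas's point that the probe failures came from the missing setup steps, not from the hardware.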