From: Aaron Lee <acl049@ucsd.edu>
Date: Mon, 21 Feb 2022 12:10:08 -0800
Subject: Re: ConnectX5 Setup with DPDK
To: Thomas Monjalon <thomas@monjalon.net>
Cc: users@dpdk.org, asafp@nvidia.com
List-Id: DPDK usage discussions

Hi Thomas,

Actually, I remembered that in my previous setup I had run dpdk-devbind.py to bind the mlx5 NIC to igb_uio. I read somewhere that this is not needed with mlx5, and I just wanted to confirm that this is correct.

Best,
Aaron

On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee wrote:
> Hi Thomas,
>
> I tried installing things from scratch two days ago and have gotten
> things working! I think part of the problem was figuring out the correct
> hugepage allocation for my system. If I recall correctly, I tried setting
> up my system with the default page size of 1G, but perhaps didn't have
> enough pages allocated at the time. I currently have the following, which
> gives me the output you've shown previously.
>
> root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> Node Pages Size Total
> 0    16    1Gb  16Gb
> 1    16    1Gb  16Gb
>
> root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> TELEMETRY: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address       Name         Driver   Status   Link
> 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up       100 Gbps
>
> Best,
> Aaron
>
> On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> wrote:
>
>> 21/02/2022 19:52, Thomas Monjalon:
>> > 18/02/2022 22:12, Aaron Lee:
>> > > Hello,
>> > >
>> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
>> > > wondering if the card I have simply isn't compatible. I first noticed
>> > > that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
>> > > error logs when running dpdk-pdump.
>> >
>> > When testing a NIC, it is more convenient to use dpdk-testpmd.
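[As a side note on the hugepage sizing discussed above: plain shell arithmetic is enough to sanity-check that each socket's mbuf pool from the testpmd log (n=779456 mbufs of size 2176, ignoring mempool metadata overhead) fits comfortably in one node's 16 x 1 GiB reservation. A sketch:]

```shell
# Numbers taken from the testpmd log above.
pool_bytes=$((779456 * 2176))                 # one socket's mbuf pool
node_bytes=$((16 * 1024 * 1024 * 1024))       # 16 x 1 GiB pages per node
echo "mbuf pool: $pool_bytes bytes (~$((pool_bytes / 1048576)) MiB)"
echo "per-node hugepage reservation: $node_bytes bytes"
```

[Roughly 1.6 GiB per pool against 16 GiB per node, so the reservation has ample headroom.]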
>> >
>> > > EAL: Detected CPU lcores: 80
>> > > EAL: Detected NUMA nodes: 2
>> > > EAL: Detected static linkage of DPDK
>> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> > > file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
>> > > vdev_scan(): Failed to request vdev from primary
>> > > EAL: Selected IOVA mode 'PA'
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> > > file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
>> > > EAL: Cannot request default VFIO container fd
>> > > EAL: VFIO support could not be initialized
>> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> > > file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
>> > > mlx5_common: port 0 request to primary process failed
>> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering
>> > > an error: No such file or directory
>> > > mlx5_common: Failed to load driver mlx5_eth
>> > > EAL: Requested device 0000:af:00.0 cannot be used
>> > > EAL: Error - exiting with code: 1
>> > >   Cause: No Ethernet ports - bye
>> >
>> > From this log, we miss the previous steps before running the
>> > application.
>> >
>> > Please check these simple steps:
>> > - install rdma-core
>> > - build dpdk (meson build && ninja -C build)
>> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
>> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
>> >
>> > EAL: Detected CPU lcores: 10
>> > EAL: Detected NUMA nodes: 1
>> > EAL: Detected static linkage of DPDK
>> > EAL: Selected IOVA mode 'PA'
>> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
>> > Interactive-mode selected
>> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
>> > testpmd: preferred mempool ops selected: ring_mp_mc
>> > Configuring Port 0 (socket 0)
>> > Port 0: 0C:42:A1:D6:E0:00
>> > Checking link statuses...
>> > Done
>> > testpmd> show port summary all
>> > Number of available ports: 1
>> > Port MAC Address       Name     Driver   Status   Link
>> > 0    0C:42:A1:D6:E0:00 08:00.0  mlx5_pci up       25 Gbps
>> >
>> > > I noticed that the pci id of the card I was given is 15b3:1017 as
>> > > below. This sort of indicates to me that the PMD driver isn't
>> > > supported on this card.
>> >
>> > This card is well supported and even officially tested with DPDK 21.11,
>> > as you can see in the release notes:
>> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
>> >
>> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
>> > > [ConnectX-5] [15b3:1017]
>> > >
>> > > I'd appreciate it if someone has gotten this card to work with DPDK to
>> > > point me in the right direction, or if my suspicions were correct that
>> > > this card doesn't work with the PMD.
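[The four steps quoted above can be sketched as a short setup script. This is only a sketch under assumptions not stated in the thread: a DPDK 21.11 source tree as the working directory and Debian/Ubuntu package names; adjust for your distro.]

```shell
# Sketch of the quoted setup steps (assumed Debian/Ubuntu package names).
sudo apt-get install -y rdma-core libibverbs-dev   # mlx5 user-space deps
meson build && ninja -C build                      # build DPDK
sudo usertools/dpdk-hugepages.py -r 1G             # reserve hugepages
echo "show port summary all" | sudo build/app/dpdk-testpmd --in-memory -- -i
```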
>>
>> If you want to check which hardware is supported by a PMD,
>> you can use this command:
>>
>> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
>> PMD NAME: mlx5_eth
>> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
>> PMD HW SUPPORT:
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
>>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
>>
>> > Please tell me what drove you into the wrong direction,
>> > because I really would like to improve the documentation & tools.
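[A grep over that dpdk-pmdinfo.py output is a quick way to confirm a specific PCI id is supported. In the sketch below the relevant line of the output is inlined so it is self-contained; in practice, pipe the tool's output into grep instead.]

```shell
# One line of the dpdk-pmdinfo.py output above, inlined for illustration;
# normally: usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so | grep ...
line='Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)'
# 15b3:1017 is the vendor:device id reported by lspci in the thread.
if printf '%s\n' "$line" | grep -q '(15b3) : .*(1017)'; then
    echo "15b3:1017 is supported by mlx5"
fi
```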