DPDK usage discussions
From: Aaron Lee <acl049@ucsd.edu>
To: Thomas Monjalon <thomas@monjalon.net>
Cc: users@dpdk.org, asafp@nvidia.com
Subject: Re: ConnectX5 Setup with DPDK
Date: Mon, 21 Feb 2022 12:10:08 -0800	[thread overview]
Message-ID: <CAPd2kkXk+jn_s9YRBcvLp3aYxBtYQF7K-qgg7cPEm_3-EapBdQ@mail.gmail.com> (raw)
In-Reply-To: <CAPd2kkU=fjzmYCazXvOj4-m+CHq5jFZgO0sLZnhWhpoeC3O4qQ@mail.gmail.com>


Hi Thomas,

Actually, I remembered that in my previous setup I had run dpdk-devbind.py
to bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
this, and I just wanted to confirm that this is correct.
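
For reference, my understanding is that mlx5 is a bifurcated driver that
works through the kernel's mlx5_core/ib_uverbs modules (via rdma-core), so
no igb_uio/vfio-pci binding should be needed. A quick sanity check would be
something like the following (the output line is illustrative, not copied
from my machine):

    usertools/dpdk-devbind.py --status
    # Network devices using kernel driver
    # ===================================
    # 0000:af:00.0 'MT27800 Family [ConnectX-5] 1017' if=ens1f0 drv=mlx5_core unused=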

Best,
Aaron

On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee <acl049@ucsd.edu> wrote:

> Hi Thomas,
>
> I tried installing things from scratch two days ago and have gotten
> things working! I think part of the problem was figuring out the correct
> hugepage allocation for my system. If I recall correctly, I had set the
> default page size to 1G but perhaps didn't have enough pages allocated at
> the time. I currently have the following, which gives me the output you've
> shown previously.
>
> root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> Node Pages Size Total
> 0    16    1Gb    16Gb
> 1    16    1Gb    16Gb
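>
> In case it's useful to anyone else hitting this, the reservation itself was
> done roughly like this (if I'm remembering the exact flags right; the sizes
> are just what I picked for this box, not a requirement):
>
>     usertools/dpdk-hugepages.py -p 1G --node 0 -r 16G
>     usertools/dpdk-hugepages.py -p 1G --node 1 -r 16G
>     usertools/dpdk-hugepages.py -s   # verify, gives the table above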
>
> root@yeti-04:~/dpdk-21.11# echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i
> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> TELEMETRY: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address       Name         Driver         Status   Link
> 0    EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci       up       100 Gbps
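>
> From here I think the usual smoke test is just to start forwarding from the
> testpmd prompt and check the counters, e.g.:
>
>     testpmd> start
>     testpmd> show port stats all
>     testpmd> stop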
>
> Best,
> Aaron
>
> On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> wrote:
>
>> 21/02/2022 19:52, Thomas Monjalon:
>> > 18/02/2022 22:12, Aaron Lee:
>> > > Hello,
>> > >
>> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11, but I'm
>> > > wondering if the card I have simply isn't compatible. I first noticed
>> > > that the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
>> > > error logs when running dpdk-pdump.
>> >
>> > When testing a NIC, it is more convenient to use dpdk-testpmd.
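>> >
>> > Note also that dpdk-pdump runs as a secondary process, so it expects a
>> > primary DPDK application to be running already; that is likely why the
>> > mp_socket requests in your log fail. A rough sketch (the port number and
>> > capture path are just placeholders):
>> >
>> >   # terminal 1: primary process
>> >   build/app/dpdk-testpmd -- -i
>> >   # terminal 2: attach the capture tool as a secondary process
>> >   build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'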
>> >
>> > > EAL: Detected CPU lcores: 80
>> > > EAL: Detected NUMA nodes: 2
>> > > EAL: Detected static linkage of DPDK
>> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
>> > > vdev_scan(): Failed to request vdev from primary
>> > > EAL: Selected IOVA mode 'PA'
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
>> > > EAL: Cannot request default VFIO container fd
>> > > EAL: VFIO support could not be initialized
>> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
>> > > mlx5_common: port 0 request to primary process failed
>> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an error: No such file or directory
>> > > mlx5_common: Failed to load driver mlx5_eth
>> > > EAL: Requested device 0000:af:00.0 cannot be used
>> > > EAL: Error - exiting with code: 1
>> > >   Cause: No Ethernet ports - bye
>> >
>> > From this log, we are missing the steps you took before running the
>> > application.
>> >
>> > Please check these simple steps:
>> > - install rdma-core (see the example below this list)
>> > - build dpdk (meson build && ninja -C build)
>> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
>> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
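>> >
>> > For the rdma-core step, on a Debian/Ubuntu-based system (adjust the
>> > package manager and names for your distro) something like this is enough
>> > before building DPDK:
>> >
>> >   apt-get install rdma-core libibverbs-dev ibverbs-providers ibverbs-utils
>> >   ibv_devinfo   # should list the ConnectX-5 ports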
>> >
>> > EAL: Detected CPU lcores: 10
>> > EAL: Detected NUMA nodes: 1
>> > EAL: Detected static linkage of DPDK
>> > EAL: Selected IOVA mode 'PA'
>> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
>> > Interactive-mode selected
>> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
>> > testpmd: preferred mempool ops selected: ring_mp_mc
>> > Configuring Port 0 (socket 0)
>> > Port 0: 0C:42:A1:D6:E0:00
>> > Checking link statuses...
>> > Done
>> > testpmd> show port summary all
>> > Number of available ports: 1
>> > Port MAC Address       Name         Driver         Status   Link
>> > 0    0C:42:A1:D6:E0:00 08:00.0      mlx5_pci       up       25 Gbps
>> >
>> > > I noticed that the PCI ID of the card I was given is 15b3:1017, as shown
>> > > below. This sort of indicates to me that the PMD isn't supported on
>> > > this card.
>> >
>> > This card is well supported and even officially tested with DPDK 21.11,
>> > as you can see in the release notes:
>> >
>> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
>> >
>> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
>> > > Family [ConnectX-5] [15b3:1017]
>> > >
>> > > I'd appreciate it if someone who has gotten this card to work with DPDK
>> > > could point me in the right direction, or confirm my suspicion that
>> > > this card doesn't work with the PMD.
>>
>> If you want to check which hardware is supported by a PMD,
>> you can use this command:
>>
>> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
>> PMD NAME: mlx5_eth
>> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
>> PMD HW SUPPORT:
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
>>  Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
>>  Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
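>>
>> To cross-check your card against that list, the vendor:device ID can be
>> read straight from lspci (assuming pciutils is installed):
>>
>>   lspci -nn -d 15b3: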
>>
>> > Please tell me what led you in the wrong direction,
>> > because I really would like to improve the documentation & tools.
>>


