* ConnectX5 Setup with DPDK
@ 2022-02-18 21:12 Aaron Lee
2022-02-21 18:52 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Aaron Lee @ 2022-02-18 21:12 UTC (permalink / raw)
To: users
Hello,
I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
wondering if the card I have simply isn't compatible. I first noticed that
the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
logs when running dpdk-pdump.
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
vdev_scan(): Failed to request vdev from primary
EAL: Selected IOVA mode 'PA'
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
EAL: Cannot request default VFIO container fd
EAL: VFIO support could not be initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
directory
EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
mlx5_common: port 0 request to primary process failed
mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an
error: No such file or directory
mlx5_common: Failed to load driver mlx5_eth
EAL: Requested device 0000:af:00.0 cannot be used
EAL: Error - exiting with code: 1
Cause: No Ethernet ports - bye
I noticed that the pci id of the card I was given is 15b3:1017 as below.
This sort of indicates to me that the PMD driver isn't supported on this
card.
af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
[ConnectX-5] [15b3:1017]
I'd appreciate it if someone has gotten this card to work with DPDK to
point me in the right direction or if my suspicions were correct that this
card doesn't work with the PMD.
Best,
Aaron
* Re: ConnectX5 Setup with DPDK
2022-02-18 21:12 ConnectX5 Setup with DPDK Aaron Lee
@ 2022-02-21 18:52 ` Thomas Monjalon
2022-02-21 19:03 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Thomas Monjalon @ 2022-02-21 18:52 UTC (permalink / raw)
To: Aaron Lee; +Cc: users, asafp
18/02/2022 22:12, Aaron Lee:
> Hello,
>
> I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> wondering if the card I have simply isn't compatible. I first noticed that
> the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> logs when running dpdk-pdump.
When testing a NIC, it is more convenient to use dpdk-testpmd.
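For context: dpdk-pdump runs as a secondary process and needs a primary DPDK
application already running, which is what the mp_socket errors below are
complaining about. A minimal sketch (the PCI address and capture file are
only placeholders):
# terminal 1: primary process
build/app/dpdk-testpmd -a 0000:af:00.0 -- -i
# terminal 2: attach dpdk-pdump as a secondary process
build/app/dpdk-pdump -- --pdump 'port=0,queue=*,rx-dev=/tmp/rx.pcap'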
> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> vdev_scan(): Failed to request vdev from primary
> EAL: Selected IOVA mode 'PA'
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> EAL: Cannot request default VFIO container fd
> EAL: VFIO support could not be initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> directory
> EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> mlx5_common: port 0 request to primary process failed
> mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an
> error: No such file or directory
> mlx5_common: Failed to load driver mlx5_eth
> EAL: Requested device 0000:af:00.0 cannot be used
> EAL: Error - exiting with code: 1
> Cause: No Ethernet ports - bye
From this log, we miss the previous steps before running the application.
Please check these simple steps:
- install rdma-core
- build dpdk (meson build && ninja -C build)
- reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
- run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
EAL: Detected CPU lcores: 10
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 0C:42:A1:D6:E0:00
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address Name Driver Status Link
0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
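For reference, the whole sequence above might look like this end to end
(the package name, hugepage amount and PCI address are only examples):
apt install rdma-core                 # or your distribution's equivalent
meson build && ninja -C build
usertools/dpdk-hugepages.py -p 1G -r 16G
echo show port summary all | build/app/dpdk-testpmd --in-memory -a 0000:af:00.0 -- -i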
> I noticed that the pci id of the card I was given is 15b3:1017 as below.
> This sort of indicates to me that the PMD driver isn't supported on this
> card.
This card is well supported and even officially tested with DPDK 21.11,
as you can see in the release notes:
https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> [ConnectX-5] [15b3:1017]
>
> I'd appreciate it if someone has gotten this card to work with DPDK to
> point me in the right direction or if my suspicions were correct that this
> card doesn't work with the PMD.
Please tell me what drove you into the wrong direction,
because I really would like to improve the documentation & tools.
* Re: ConnectX5 Setup with DPDK
2022-02-21 18:52 ` Thomas Monjalon
@ 2022-02-21 19:03 ` Thomas Monjalon
2022-02-21 19:45 ` Aaron Lee
0 siblings, 1 reply; 8+ messages in thread
From: Thomas Monjalon @ 2022-02-21 19:03 UTC (permalink / raw)
To: Aaron Lee; +Cc: users, asafp
21/02/2022 19:52, Thomas Monjalon:
> 18/02/2022 22:12, Aaron Lee:
> > Hello,
> >
> > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > wondering if the card I have simply isn't compatible. I first noticed that
> > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> > logs when running dpdk-pdump.
>
> When testing a NIC, it is more convenient to use dpdk-testpmd.
>
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > vdev_scan(): Failed to request vdev from primary
> > EAL: Selected IOVA mode 'PA'
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > EAL: Cannot request default VFIO container fd
> > EAL: VFIO support could not be initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such file or
> > directory
> > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > mlx5_common: port 0 request to primary process failed
> > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering an
> > error: No such file or directory
> > mlx5_common: Failed to load driver mlx5_eth
> > EAL: Requested device 0000:af:00.0 cannot be used
> > EAL: Error - exiting with code: 1
> > Cause: No Ethernet ports - bye
>
> From this log, we miss the previous steps before running the application.
>
> Please check these simple steps:
> - install rdma-core
> - build dpdk (meson build && ninja -C build)
> - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> - run testpmd (echo show port summary all | build/app/dpdk-testpmd --in-memory -- -i)
>
> EAL: Detected CPU lcores: 10
> EAL: Detected NUMA nodes: 1
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket 0)
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: 0C:42:A1:D6:E0:00
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address Name Driver Status Link
> 0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
>
> > I noticed that the pci id of the card I was given is 15b3:1017 as below.
> > This sort of indicates to me that the PMD driver isn't supported on this
> > card.
>
> This card is well supported and even officially tested with DPDK 21.11,
> as you can see in the release notes:
> https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
>
> > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800 Family
> > [ConnectX-5] [15b3:1017]
> >
> > I'd appreciate it if someone has gotten this card to work with DPDK to
> > point me in the right direction or if my suspicions were correct that this
> > card doesn't work with the PMD.
If you want to check which hardware is supported by a PMD,
you can use this command:
usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
PMD NAME: mlx5_eth
PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
PMD HW SUPPORT:
Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All Subdevices)
Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual Function] (1014) (All Subdevices)
Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All Subdevices)
Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual Function] (1016) (All Subdevices)
Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All Subdevices)
Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual Function] (1018) (All Subdevices)
Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All Subdevices)
Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual Function] (101a) (All Subdevices)
Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5 network controller (a2d2) (All Subdevices)
Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF (a2d3) (All Subdevices)
Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All Subdevices)
Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual Function] (101c) (All Subdevices)
Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All Subdevices)
Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function (101e) (All Subdevices)
Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6 Dx network controller (a2d6) (All Subdevices)
Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All Subdevices)
Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All Subdevices)
Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7 network controller (a2dc) (All Subdevices)
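To check a single device, the same output can simply be filtered, e.g. for
the ConnectX-5 id reported above:
usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so | grep 1017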
> Please tell me what drove you into the wrong direction,
> because I really would like to improve the documentation & tools.
* Re: ConnectX5 Setup with DPDK
2022-02-21 19:03 ` Thomas Monjalon
@ 2022-02-21 19:45 ` Aaron Lee
2022-02-21 20:10 ` Aaron Lee
0 siblings, 1 reply; 8+ messages in thread
From: Aaron Lee @ 2022-02-21 19:45 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: users, asafp
Hi Thomas,
I tried installing things from scratch two days ago and have gotten
things working! I think part of the problem was figuring out the correct
hugepage allocation for my system. If I recall correctly, I tried setting
up my system with default page size 1G but perhaps didn't have enough pages
allocated at the time. Currently have the following which gives me the
output you've shown previously.
root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
Node Pages Size Total
0 16 1Gb 16Gb
1 16 1Gb 16Gb
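A per-node reservation like this can presumably be recreated with the same
script; the -n/--node arguments below are an assumption on my part:
usertools/dpdk-hugepages.py -p 1G -r 16G -n 0
usertools/dpdk-hugepages.py -p 1G -r 16G -n 1
usertools/dpdk-hugepages.py -s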
root@yeti-04:~/dpdk-21.11# echo show port summary all |
build/app/dpdk-testpmd --in-memory -- -i
EAL: Detected CPU lcores: 80
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Selected IOVA mode 'PA'
EAL: No free 2048 kB hugepages reported on node 0
EAL: No free 2048 kB hugepages reported on node 1
EAL: No available 2048 kB hugepages reported
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.
Configuring Port 0 (socket 1)
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
testpmd> show port summary all
Number of available ports: 1
Port MAC Address Name Driver Status Link
0 EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up 100 Gbps
Best,
Aaron
On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
wrote:
> 21/02/2022 19:52, Thomas Monjalon:
> > 18/02/2022 22:12, Aaron Lee:
> > > Hello,
> > >
> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> > > wondering if the card I have simply isn't compatible. I first noticed
> that
> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the error
> > > logs when running dpdk-pdump.
> >
> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >
> > > EAL: Detected CPU lcores: 80
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Detected static linkage of DPDK
> > > EAL: Multi-process socket
> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > > vdev_scan(): Failed to request vdev from primary
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > > EAL: Cannot request default VFIO container fd
> > > EAL: VFIO support could not be initialized
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0
> (socket 1)
> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> file or
> > > directory
> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > > mlx5_common: port 0 request to primary process failed
> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering
> an
> > > error: No such file or directory
> > > mlx5_common: Failed to load driver mlx5_eth
> > > EAL: Requested device 0000:af:00.0 cannot be used
> > > EAL: Error - exiting with code: 1
> > > Cause: No Ethernet ports - bye
> >
> > From this log, we miss the previous steps before running the application.
> >
> > Please check these simple steps:
> > - install rdma-core
> > - build dpdk (meson build && ninja -C build)
> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd
> --in-memory -- -i)
> >
> > EAL: Detected CPU lcores: 10
> > EAL: Detected NUMA nodes: 1
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0 (socket
> 0)
> > Interactive-mode selected
> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176,
> socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Configuring Port 0 (socket 0)
> > Port 0: 0C:42:A1:D6:E0:00
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address Name Driver Status Link
> > 0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
> >
> > > I noticed that the pci id of the card I was given is 15b3:1017 as
> below.
> > > This sort of indicates to me that the PMD driver isn't supported on
> this
> > > card.
> >
> > This card is well supported and even officially tested with DPDK 21.11,
> > as you can see in the release notes:
> >
> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> >
> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
> Family
> > > [ConnectX-5] [15b3:1017]
> > >
> > > I'd appreciate it if someone has gotten this card to work with DPDK to
> > > point me in the right direction or if my suspicions were correct that
> this
> > > card doesn't work with the PMD.
>
> If you want to check which hardware is supported by a PMD,
> you can use this command:
>
> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> PMD NAME: mlx5_eth
> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> PMD HW SUPPORT:
> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual
> Function] (1014) (All Subdevices)
> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual
> Function] (1016) (All Subdevices)
> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual
> Function] (1018) (All Subdevices)
> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual
> Function] (101a) (All Subdevices)
> Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5
> network controller (a2d2) (All Subdevices)
> Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family VF
> (a2d3) (All Subdevices)
> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual
> Function] (101c) (All Subdevices)
> Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All
> Subdevices)
> Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function
> (101e) (All Subdevices)
> Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6
> Dx network controller (a2d6) (All Subdevices)
> Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All
> Subdevices)
> Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7
> network controller (a2dc) (All Subdevices)
>
> > Please tell me what drove you into the wrong direction,
> > because I really would like to improve the documentation & tools.
>
>
>
>
* Re: ConnectX5 Setup with DPDK
2022-02-21 19:45 ` Aaron Lee
@ 2022-02-21 20:10 ` Aaron Lee
2022-02-22 7:10 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Aaron Lee @ 2022-02-21 20:10 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: users, asafp
Hi Thomas,
Actually I remembered in my previous setup I had run dpdk-devbind.py to
bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
this and just wanted to confirm that this is correct.
Best,
Aaron
On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee <acl049@ucsd.edu> wrote:
> Hi Thomas,
>
> I tried installing things from scratch two days ago and have gotten
> things working! I think part of the problem was figuring out the correct
> hugepage allocation for my system. If I recall correctly, I tried setting
> up my system with default page size 1G but perhaps didn't have enough pages
> allocated at the time. Currently have the following which gives me the
> output you've shown previously.
>
> root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> Node Pages Size Total
> 0 16 1Gb 16Gb
> 1 16 1Gb 16Gb
>
> root@yeti-04:~/dpdk-21.11# echo show port summary all |
> build/app/dpdk-testpmd --in-memory -- -i
> EAL: Detected CPU lcores: 80
> EAL: Detected NUMA nodes: 2
> EAL: Detected static linkage of DPDK
> EAL: Selected IOVA mode 'PA'
> EAL: No free 2048 kB hugepages reported on node 0
> EAL: No free 2048 kB hugepages reported on node 1
> EAL: No available 2048 kB hugepages reported
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> TELEMETRY: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> testpmd> show port summary all
> Number of available ports: 1
> Port MAC Address Name Driver Status Link
> 0 EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up 100 Gbps
>
> Best,
> Aaron
>
> On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> wrote:
>
>> 21/02/2022 19:52, Thomas Monjalon:
>> > 18/02/2022 22:12, Aaron Lee:
>> > > Hello,
>> > >
>> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
>> > > wondering if the card I have simply isn't compatible. I first noticed
>> that
>> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
>> error
>> > > logs when running dpdk-pdump.
>> >
>> > When testing a NIC, it is more convenient to use dpdk-testpmd.
>> >
>> > > EAL: Detected CPU lcores: 80
>> > > EAL: Detected NUMA nodes: 2
>> > > EAL: Detected static linkage of DPDK
>> > > EAL: Multi-process socket
>> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> file or
>> > > directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
>> > > vdev_scan(): Failed to request vdev from primary
>> > > EAL: Selected IOVA mode 'PA'
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> file or
>> > > directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
>> > > EAL: Cannot request default VFIO container fd
>> > > EAL: VFIO support could not be initialized
>> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0
>> (socket 1)
>> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
>> file or
>> > > directory
>> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
>> > > mlx5_common: port 0 request to primary process failed
>> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering
>> an
>> > > error: No such file or directory
>> > > mlx5_common: Failed to load driver mlx5_eth
>> > > EAL: Requested device 0000:af:00.0 cannot be used
>> > > EAL: Error - exiting with code: 1
>> > > Cause: No Ethernet ports - bye
>> >
>> > From this log, we miss the previous steps before running the
>> application.
>> >
>> > Please check these simple steps:
>> > - install rdma-core
>> > - build dpdk (meson build && ninja -C build)
>> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
>> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd
>> --in-memory -- -i)
>> >
>> > EAL: Detected CPU lcores: 10
>> > EAL: Detected NUMA nodes: 1
>> > EAL: Detected static linkage of DPDK
>> > EAL: Selected IOVA mode 'PA'
>> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0
>> (socket 0)
>> > Interactive-mode selected
>> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176,
>> socket=0
>> > testpmd: preferred mempool ops selected: ring_mp_mc
>> > Configuring Port 0 (socket 0)
>> > Port 0: 0C:42:A1:D6:E0:00
>> > Checking link statuses...
>> > Done
>> > testpmd> show port summary all
>> > Number of available ports: 1
>> > Port MAC Address Name Driver Status Link
>> > 0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
>> >
>> > > I noticed that the pci id of the card I was given is 15b3:1017 as
>> below.
>> > > This sort of indicates to me that the PMD driver isn't supported on
>> this
>> > > card.
>> >
>> > This card is well supported and even officially tested with DPDK 21.11,
>> > as you can see in the release notes:
>> >
>> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
>> >
>> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
>> Family
>> > > [ConnectX-5] [15b3:1017]
>> > >
>> > > I'd appreciate it if someone has gotten this card to work with DPDK to
>> > > point me in the right direction or if my suspicions were correct that
>> this
>> > > card doesn't work with the PMD.
>>
>> If you want to check which hardware is supported by a PMD,
>> you can use this command:
>>
>> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
>> PMD NAME: mlx5_eth
>> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
>> PMD HW SUPPORT:
>> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual
>> Function] (1014) (All Subdevices)
>> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015)
>> (All Subdevices)
>> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual
>> Function] (1016) (All Subdevices)
>> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual
>> Function] (1018) (All Subdevices)
>> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019)
>> (All Subdevices)
>> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual
>> Function] (101a) (All Subdevices)
>> Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5
>> network controller (a2d2) (All Subdevices)
>> Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family
>> VF (a2d3) (All Subdevices)
>> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual
>> Function] (101c) (All Subdevices)
>> Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function
>> (101e) (All Subdevices)
>> Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6
>> Dx network controller (a2d6) (All Subdevices)
>> Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All
>> Subdevices)
>> Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7
>> network controller (a2dc) (All Subdevices)
>>
>> > Please tell me what drove you into the wrong direction,
>> > because I really would like to improve the documentation & tools.
>>
>>
>>
>>
* Re: ConnectX5 Setup with DPDK
2022-02-21 20:10 ` Aaron Lee
@ 2022-02-22 7:10 ` Thomas Monjalon
2022-02-25 18:29 ` Aaron Lee
0 siblings, 1 reply; 8+ messages in thread
From: Thomas Monjalon @ 2022-02-22 7:10 UTC (permalink / raw)
To: Aaron Lee; +Cc: users, asafp
21/02/2022 21:10, Aaron Lee:
> Hi Thomas,
>
> Actually I remembered in my previous setup I had run dpdk-devbind.py to
> bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> this and just wanted to confirm that this is correct.
Indeed, the mlx5 PMD runs on top of the mlx5 kernel driver.
We don't need UIO or VFIO drivers.
The kernel modules must remain loaded and can be used at the same time.
When DPDK is working, the traffic goes to the userspace PMD by default,
but it is possible to configure some flows to go directly to the kernel driver.
This behaviour is called "bifurcated model".
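As a rough illustration with testpmd flow commands (the UDP port 4789 match
is only an example): with flow isolation enabled, only the flows you create
reach the DPDK queues, and everything else stays with the kernel driver.
testpmd> port stop 0
testpmd> flow isolate 0 1
testpmd> port start 0
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 4789 / end actions queue index 0 / end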
> On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee <acl049@ucsd.edu> wrote:
>
> > Hi Thomas,
> >
> > I tried installing things from scratch two days ago and have gotten
> > things working! I think part of the problem was figuring out the correct
> > hugepage allocation for my system. If I recall correctly, I tried setting
> > up my system with default page size 1G but perhaps didn't have enough pages
> > allocated at the time. Currently have the following which gives me the
> > output you've shown previously.
> >
> > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > Node Pages Size Total
> > 0 16 1Gb 16Gb
> > 1 16 1Gb 16Gb
> >
> > root@yeti-04:~/dpdk-21.11# echo show port summary all |
> > build/app/dpdk-testpmd --in-memory -- -i
> > EAL: Detected CPU lcores: 80
> > EAL: Detected NUMA nodes: 2
> > EAL: Detected static linkage of DPDK
> > EAL: Selected IOVA mode 'PA'
> > EAL: No free 2048 kB hugepages reported on node 0
> > EAL: No free 2048 kB hugepages reported on node 1
> > EAL: No available 2048 kB hugepages reported
> > EAL: VFIO support initialized
> > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> > TELEMETRY: No legacy callbacks, legacy socket not created
> > Interactive-mode selected
> > testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> >
> > Warning! port-topology=paired and odd forward ports number, the last port
> > will pair with itself.
> >
> > Configuring Port 0 (socket 1)
> > Port 0: EC:0D:9A:68:21:A8
> > Checking link statuses...
> > Done
> > testpmd> show port summary all
> > Number of available ports: 1
> > Port MAC Address Name Driver Status Link
> > 0 EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up 100 Gbps
> >
> > Best,
> > Aaron
> >
> > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> > wrote:
> >
> >> 21/02/2022 19:52, Thomas Monjalon:
> >> > 18/02/2022 22:12, Aaron Lee:
> >> > > Hello,
> >> > >
> >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but I'm
> >> > > wondering if the card I have simply isn't compatible. I first noticed
> >> that
> >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> >> error
> >> > > logs when running dpdk-pdump.
> >> >
> >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> >> >
> >> > > EAL: Detected CPU lcores: 80
> >> > > EAL: Detected NUMA nodes: 2
> >> > > EAL: Detected static linkage of DPDK
> >> > > EAL: Multi-process socket
> >> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> >> > > vdev_scan(): Failed to request vdev from primary
> >> > > EAL: Selected IOVA mode 'PA'
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> >> > > EAL: Cannot request default VFIO container fd
> >> > > EAL: VFIO support could not be initialized
> >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0
> >> (socket 1)
> >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No such
> >> file or
> >> > > directory
> >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> >> > > mlx5_common: port 0 request to primary process failed
> >> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after encountering
> >> an
> >> > > error: No such file or directory
> >> > > mlx5_common: Failed to load driver mlx5_eth
> >> > > EAL: Requested device 0000:af:00.0 cannot be used
> >> > > EAL: Error - exiting with code: 1
> >> > > Cause: No Ethernet ports - bye
> >> >
> >> > From this log, we miss the previous steps before running the
> >> application.
> >> >
> >> > Please check these simple steps:
> >> > - install rdma-core
> >> > - build dpdk (meson build && ninja -C build)
> >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd
> >> --in-memory -- -i)
> >> >
> >> > EAL: Detected CPU lcores: 10
> >> > EAL: Detected NUMA nodes: 1
> >> > EAL: Detected static linkage of DPDK
> >> > EAL: Selected IOVA mode 'PA'
> >> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0
> >> (socket 0)
> >> > Interactive-mode selected
> >> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176,
> >> socket=0
> >> > testpmd: preferred mempool ops selected: ring_mp_mc
> >> > Configuring Port 0 (socket 0)
> >> > Port 0: 0C:42:A1:D6:E0:00
> >> > Checking link statuses...
> >> > Done
> >> > testpmd> show port summary all
> >> > Number of available ports: 1
> >> > Port MAC Address Name Driver Status Link
> >> > 0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
> >> >
> >> > > I noticed that the pci id of the card I was given is 15b3:1017 as
> >> below.
> >> > > This sort of indicates to me that the PMD driver isn't supported on
> >> this
> >> > > card.
> >> >
> >> > This card is well supported and even officially tested with DPDK 21.11,
> >> > as you can see in the release notes:
> >> >
> >> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> >> >
> >> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
> >> Family
> >> > > [ConnectX-5] [15b3:1017]
> >> > >
> >> > > I'd appreciate it if someone has gotten this card to work with DPDK to
> >> > > point me in the right direction or if my suspicions were correct that
> >> this
> >> > > card doesn't work with the PMD.
> >>
> >> If you want to check which hardware is supported by a PMD,
> >> you can use this command:
> >>
> >> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> >> PMD NAME: mlx5_eth
> >> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> >> PMD HW SUPPORT:
> >> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual
> >> Function] (1014) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015)
> >> (All Subdevices)
> >> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual
> >> Function] (1016) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual
> >> Function] (1018) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019)
> >> (All Subdevices)
> >> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual
> >> Function] (101a) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT416842 BlueField integrated ConnectX-5
> >> network controller (a2d2) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC family
> >> VF (a2d3) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual
> >> Function] (101c) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual Function
> >> (101e) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated ConnectX-6
> >> Dx network controller (a2d6) (All Subdevices)
> >> Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All
> >> Subdevices)
> >> Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated ConnectX-7
> >> network controller (a2dc) (All Subdevices)
> >>
> >> > Please tell me what drove you into the wrong direction,
> >> > because I really would like to improve the documentation & tools.
> >>
> >>
> >>
> >>
>
* Re: ConnectX5 Setup with DPDK
2022-02-22 7:10 ` Thomas Monjalon
@ 2022-02-25 18:29 ` Aaron Lee
2022-02-25 23:13 ` Thomas Monjalon
0 siblings, 1 reply; 8+ messages in thread
From: Aaron Lee @ 2022-02-25 18:29 UTC (permalink / raw)
To: Thomas Monjalon; +Cc: users
Hi Thomas,
I was doing some more testing and wanted to increase the RX queues for the
CX5 but was wondering how I could do that. I see in the usage example in
the docs, I could pass in --rxq=2 --txq=2 to set the queues to 2 each but I
don't see that in my output when I run the command. Below is the output
from running the command in
https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
that the MCX515A-CCAT I have can't support more than 1 queue or am I
supposed to configure another setting?
EAL: Detected 80 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
EAL: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.
Configuring Port 0 (socket 1)
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
Port 0: EC:0D:9A:68:21:A8
Checking link statuses...
Done
mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
Best,
Aaron
On Mon, Feb 21, 2022 at 11:10 PM Thomas Monjalon <thomas@monjalon.net>
wrote:
> 21/02/2022 21:10, Aaron Lee:
> > Hi Thomas,
> >
> > Actually I remembered in my previous setup I had run dpdk-devbind.py to
> > bind the mlx5 NIC to igb_uio. I read somewhere that you don't need to do
> > this and just wanted to confirm that this is correct.
>
> Indeed, the mlx5 PMD runs on top of the mlx5 kernel driver.
> We don't need UIO or VFIO drivers.
> The kernel modules must remain loaded and can be used at the same time.
> When DPDK is working, the traffic goes to the userspace PMD by default,
> but it is possible to configure some flows to go directly to the kernel
> driver.
> This behaviour is called "bifurcated model".
>
>
> > On Mon, Feb 21, 2022 at 11:45 AM Aaron Lee <acl049@ucsd.edu> wrote:
> >
> > > Hi Thomas,
> > >
> > > I tried installing things from scratch two days ago and have gotten
> > > things working! I think part of the problem was figuring out the
> correct
> > > hugepage allocation for my system. If I recall correctly, I tried
> setting
> > > up my system with default page size 1G but perhaps didn't have enough
> pages
> > > allocated at the time. Currently have the following which gives me the
> > > output you've shown previously.
> > >
> > > root@yeti-04:~/dpdk-21.11# usertools/dpdk-hugepages.py -s
> > > Node Pages Size Total
> > > 0 16 1Gb 16Gb
> > > 1 16 1Gb 16Gb
> > >
> > > root@yeti-04:~/dpdk-21.11# echo show port summary all |
> > > build/app/dpdk-testpmd --in-memory -- -i
> > > EAL: Detected CPU lcores: 80
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Detected static linkage of DPDK
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: No free 2048 kB hugepages reported on node 0
> > > EAL: No free 2048 kB hugepages reported on node 1
> > > EAL: No available 2048 kB hugepages reported
> > > EAL: VFIO support initialized
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0
> (socket 1)
> > > TELEMETRY: No legacy callbacks, legacy socket not created
> > > Interactive-mode selected
> > > testpmd: create a new mbuf pool <mb_pool_0>: n=779456, size=2176,
> socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mb_pool_1>: n=779456, size=2176,
> socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > >
> > > Warning! port-topology=paired and odd forward ports number, the last
> port
> > > will pair with itself.
> > >
> > > Configuring Port 0 (socket 1)
> > > Port 0: EC:0D:9A:68:21:A8
> > > Checking link statuses...
> > > Done
> > > testpmd> show port summary all
> > > Number of available ports: 1
> > > Port MAC Address Name Driver Status Link
> > > 0 EC:0D:9A:68:21:A8 0000:af:00.0 mlx5_pci up 100 Gbps
> > >
> > > Best,
> > > Aaron
> > >
> > > On Mon, Feb 21, 2022 at 11:03 AM Thomas Monjalon <thomas@monjalon.net>
> > > wrote:
> > >
> > >> 21/02/2022 19:52, Thomas Monjalon:
> > >> > 18/02/2022 22:12, Aaron Lee:
> > >> > > Hello,
> > >> > >
> > >> > > I'm trying to get my ConnectX5 NIC working with DPDK v21.11 but
> I'm
> > >> > > wondering if the card I have simply isn't compatible. I first
> noticed
> > >> that
> > >> > > the model I was given is MCX515A-CCA_Ax_Bx. Below are some of the
> > >> error
> > >> > > logs when running dpdk-pdump.
> > >> >
> > >> > When testing a NIC, it is more convenient to use dpdk-testpmd.
> > >> >
> > >> > > EAL: Detected CPU lcores: 80
> > >> > > EAL: Detected NUMA nodes: 2
> > >> > > EAL: Detected static linkage of DPDK
> > >> > > EAL: Multi-process socket
> > >> /var/run/dpdk/rte/mp_socket_383403_1ac7441297c92
> > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No
> such
> > >> file or
> > >> > > directory
> > >> > > EAL: Fail to send request /var/run/dpdk/rte/mp_socket:bus_vdev_mp
> > >> > > vdev_scan(): Failed to request vdev from primary
> > >> > > EAL: Selected IOVA mode 'PA'
> > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No
> such
> > >> file or
> > >> > > directory
> > >> > > EAL: Fail to send request
> /var/run/dpdk/rte/mp_socket:eal_vfio_mp_sync
> > >> > > EAL: Cannot request default VFIO container fd
> > >> > > EAL: VFIO support could not be initialized
> > >> > > EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0
> > >> (socket 1)
> > >> > > EAL: failed to send to (/var/run/dpdk/rte/mp_socket) due to No
> such
> > >> file or
> > >> > > directory
> > >> > > EAL: Fail to send request
> /var/run/dpdk/rte/mp_socket:common_mlx5_mp
> > >> > > mlx5_common: port 0 request to primary process failed
> > >> > > mlx5_net: probe of PCI device 0000:af:00.0 aborted after
> encountering
> > >> an
> > >> > > error: No such file or directory
> > >> > > mlx5_common: Failed to load driver mlx5_eth
> > >> > > EAL: Requested device 0000:af:00.0 cannot be used
> > >> > > EAL: Error - exiting with code: 1
> > >> > > Cause: No Ethernet ports - bye
> > >> >
> > >> > From this log, we miss the previous steps before running the
> > >> application.
> > >> >
> > >> > Please check these simple steps:
> > >> > - install rdma-core
> > >> > - build dpdk (meson build && ninja -C build)
> > >> > - reserve hugepages (usertools/dpdk-hugepages.py -r 1G)
> > >> > - run testpmd (echo show port summary all | build/app/dpdk-testpmd
> > >> --in-memory -- -i)
> > >> >
> > >> > EAL: Detected CPU lcores: 10
> > >> > EAL: Detected NUMA nodes: 1
> > >> > EAL: Detected static linkage of DPDK
> > >> > EAL: Selected IOVA mode 'PA'
> > >> > EAL: Probe PCI driver: mlx5_pci (15b3:101f) device: 0000:08:00.0
> > >> (socket 0)
> > >> > Interactive-mode selected
> > >> > testpmd: create a new mbuf pool <mb_pool_0>: n=219456, size=2176,
> > >> socket=0
> > >> > testpmd: preferred mempool ops selected: ring_mp_mc
> > >> > Configuring Port 0 (socket 0)
> > >> > Port 0: 0C:42:A1:D6:E0:00
> > >> > Checking link statuses...
> > >> > Done
> > >> > testpmd> show port summary all
> > >> > Number of available ports: 1
> > >> > Port MAC Address Name Driver Status Link
> > >> > 0 0C:42:A1:D6:E0:00 08:00.0 mlx5_pci up 25 Gbps
> > >> >
> > >> > > I noticed that the pci id of the card I was given is 15b3:1017 as
> > >> below.
> > >> > > This sort of indicates to me that the PMD driver isn't supported
> on
> > >> this
> > >> > > card.
> > >> >
> > >> > This card is well supported and even officially tested with DPDK
> 21.11,
> > >> > as you can see in the release notes:
> > >> >
> > >> > https://doc.dpdk.org/guides/rel_notes/release_21_11.html#tested-platforms
> > >> >
> > >> > > af:00.0 Ethernet controller [0200]: Mellanox Technologies MT27800
> > >> Family
> > >> > > [ConnectX-5] [15b3:1017]
> > >> > >
> > >> > > I'd appreciate it if someone has gotten this card to work with
> DPDK to
> > >> > > point me in the right direction or if my suspicions were correct
> that
> > >> this
> > >> > > card doesn't work with the PMD.
> > >>
> > >> If you want to check which hardware is supported by a PMD,
> > >> you can use this command:
> > >>
> > >> usertools/dpdk-pmdinfo.py build/drivers/librte_net_mlx5.so
> > >> PMD NAME: mlx5_eth
> > >> PMD KMOD DEPENDENCIES: * ib_uverbs & mlx5_core & mlx5_ib
> > >> PMD HW SUPPORT:
> > >> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4] (1013)
> (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : MT27700 Family [ConnectX-4 Virtual
> > >> Function] (1014) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx] (1015)
> > >> (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT27710 Family [ConnectX-4 Lx Virtual
> > >> Function] (1016) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5] (1017)
> (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : MT27800 Family [ConnectX-5 Virtual
> > >> Function] (1018) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex] (1019)
> > >> (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT28800 Family [ConnectX-5 Ex Virtual
> > >> Function] (101a) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT416842 BlueField integrated
> ConnectX-5
> > >> network controller (a2d2) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT416842 BlueField multicore SoC
> family
> > >> VF (a2d3) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6] (101b)
> (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : MT28908 Family [ConnectX-6 Virtual
> > >> Function] (101c) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT2892 Family [ConnectX-6 Dx] (101d)
> (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : ConnectX Family mlx5Gen Virtual
> Function
> > >> (101e) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT42822 BlueField-2 integrated
> ConnectX-6
> > >> Dx network controller (a2d6) (All Subdevices)
> > >> Mellanox Technologies (15b3) : MT2894 Family [ConnectX-6 Lx] (101f)
> (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : MT2910 Family [ConnectX-7] (1021) (All
> > >> Subdevices)
> > >> Mellanox Technologies (15b3) : MT43244 BlueField-3 integrated
> ConnectX-7
> > >> network controller (a2dc) (All Subdevices)
> > >>
> > >> > Please tell me what drove you into the wrong direction,
> > >> > because I really would like to improve the documentation & tools.
> > >>
> > >>
> > >>
> > >>
> >
>
>
>
>
>
>
* Re: ConnectX5 Setup with DPDK
2022-02-25 18:29 ` Aaron Lee
@ 2022-02-25 23:13 ` Thomas Monjalon
0 siblings, 0 replies; 8+ messages in thread
From: Thomas Monjalon @ 2022-02-25 23:13 UTC (permalink / raw)
To: Aaron Lee; +Cc: users
25/02/2022 19:29, Aaron Lee:
> Hi Thomas,
>
> I was doing some more testing and wanted to increase the RX queues for the
> CX5 but was wondering how I could do that. I see in the usage example in
> the docs, I could pass in --rxq=2 --txq=2 to set the queues to 2 each but I
> don't see that in my output when I run the command. Below is the output
> from running the command in
> https://doc.dpdk.org/guides/nics/mlx5.html#usage-example. Does this mean
> that the MCX515A-CCAT I have can't support more than 1 queue or am I
> supposed to configure another setting?
I see nothing about the number of queues in your output.
You should try the command "show config rxtx".
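For example, something like this (the queue counts and PCI address are only
placeholders) should then report 4 Rx and 4 Tx queues per port:
build/app/dpdk-testpmd --in-memory -a 0000:af:00.0 -- -i --rxq=4 --txq=4
testpmd> show config rxtx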
> EAL: Detected 80 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:af:00.0 (socket 1)
> mlx5_pci: Size 0xFFFF is not power of 2, will be aligned to 0x10000.
> EAL: No legacy callbacks, legacy socket not created
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=203456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
>
> Warning! port-topology=paired and odd forward ports number, the last port
> will pair with itself.
>
> Configuring Port 0 (socket 1)
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
> Port 0: EC:0D:9A:68:21:A8
> Checking link statuses...
> Done
> mlx5_pci: Failed to init cache list FDB_ingress_0_matcher_cache entry (nil).
>
> Best,
> Aaron
Thread overview: 8 messages
2022-02-18 21:12 ConnectX5 Setup with DPDK Aaron Lee
2022-02-21 18:52 ` Thomas Monjalon
2022-02-21 19:03 ` Thomas Monjalon
2022-02-21 19:45 ` Aaron Lee
2022-02-21 20:10 ` Aaron Lee
2022-02-22 7:10 ` Thomas Monjalon
2022-02-25 18:29 ` Aaron Lee
2022-02-25 23:13 ` Thomas Monjalon