DPDK usage discussions
* Trouble bringing up dpdk testpmd with Mellanox ports
@ 2022-01-12 13:28 Sindhura Bandi
  2022-01-17 16:26 ` PATRICK KEROULAS
  0 siblings, 1 reply; 4+ messages in thread
From: Sindhura Bandi @ 2022-01-12 13:28 UTC (permalink / raw)
  To: users; +Cc: Venugopal Thacahappilly


Hi,


I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect the ports.


OS & Kernel:

Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux

The steps followed:

  *   Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
  *   Downloaded the dpdk-18.11 source and built it after making the following changes in the config (a sketch of the build commands follows the output below):

           CONFIG_RTE_LIBRTE_MLX5_PMD=y
           CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
           CONFIG_RTE_BUILD_SHARED_LIB=y

  *   When I run testpmd, it does not recognize any Mellanox ports:


#########
root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 --no-pci -- --total-num-mbufs 1025
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
##########
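
For reference, the build itself was the standard dpdk-18.11 legacy-make flow, roughly as follows (the build target and install directory here are inferred from the paths in the output above):

           cd ~/dpdk-18.11
           # set the three CONFIG_RTE_* options listed above in config/common_base,
           # then build and install into ./myinstall:
           make install T=x86_64-native-linuxapp-gcc DESTDIR=myinstall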

root@debian-10:~# lspci | grep Mellanox
82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
root@debian-10:~# ibv_devinfo
hca_id:    mlx5_0
    transport:            InfiniBand (0)
    fw_ver:                16.28.4512
    node_guid:            b8ce:f603:00f2:7952
    sys_image_guid:            b8ce:f603:00f2:7952
    vendor_id:            0x02c9
    vendor_part_id:            4121
    hw_ver:                0x0
    board_id:            DEL0000000004
    phys_port_cnt:            1
        port:    1
            state:            PORT_ACTIVE (4)
            max_mtu:        4096 (5)
            active_mtu:        1024 (3)
            sm_lid:            0
            port_lid:        0
            port_lmc:        0x00
            link_layer:        Ethernet

hca_id:    mlx5_1
    transport:            InfiniBand (0)
    fw_ver:                16.28.4512
    node_guid:            b8ce:f603:00f2:7953
    sys_image_guid:            b8ce:f603:00f2:7952
    vendor_id:            0x02c9
    vendor_part_id:            4121
    hw_ver:                0x0
    board_id:            DEL0000000004
    phys_port_cnt:            1
        port:    1
            state:            PORT_ACTIVE (4)
            max_mtu:        4096 (5)
            active_mtu:        1024 (3)
            sm_lid:            0
            port_lid:        0
            port_lmc:        0x00
            link_layer:        Ethernet


I'm not sure where I'm going wrong. Any hints will be much appreciated.

Thanks,
Sindhu



* Re: Trouble bringing up dpdk testpmd with Mellanox ports
  2022-01-12 13:28 Trouble bringing up dpdk testpmd with Mellanox ports Sindhura Bandi
@ 2022-01-17 16:26 ` PATRICK KEROULAS
  2022-01-24 17:43   ` Sindhura Bandi
  0 siblings, 1 reply; 4+ messages in thread
From: PATRICK KEROULAS @ 2022-01-17 16:26 UTC (permalink / raw)
  To: Sindhura Bandi; +Cc: users, Venugopal Thacahappilly

Hello,
Try without `--no-pci` in your testpmd command.
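`--no-pci` disables the EAL's PCI bus scan entirely, so the `-w 82:00.0` whitelist has nothing to match and no ports are probed. For example, the same command minus that flag:

./bin/testpmd -l 1-3 -w 82:00.0 -- --total-num-mbufs 1025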

On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
<sindhura.bandi@certesnetworks.com> wrote:
>
> Hi,
>
>
> I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect the ports.
>
>
> OS & Kernel:
>
> Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux
>
> The steps followed:
>
> Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
> Downloaded the dpdk-18.11 source and built it after making the following changes in the config:
>
>            CONFIG_RTE_LIBRTE_MLX5_PMD=y
>            CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
>            CONFIG_RTE_BUILD_SHARED_LIB=y
>
> When I run testpmd, it does not recognize any Mellanox ports:
>
>
> #########
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 --no-pci -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
>
>   io packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ##########
>
> root@debian-10:~# lspci | grep Mellanox
> 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> root@debian-10:~# ibv_devinfo
> hca_id:    mlx5_0
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7952
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
> hca_id:    mlx5_1
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7953
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
>
> I'm not sure where I'm going wrong. Any hints will be much appreciated.
>
> Thanks,
> Sindhu



* Re: Trouble bringing up dpdk testpmd with Mellanox ports
  2022-01-17 16:26 ` PATRICK KEROULAS
@ 2022-01-24 17:43   ` Sindhura Bandi
  2022-01-27 12:38     ` madhukar mythri
  0 siblings, 1 reply; 4+ messages in thread
From: Sindhura Bandi @ 2022-01-24 17:43 UTC (permalink / raw)
  To: PATRICK KEROULAS; +Cc: users, Venugopal Thacahappilly


Hi,


Thank you for the response.

I tried what you suggested, but with the same result.


##################

root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w 82:00.0  -- --total-num-mbufs 1025
./bin/testpmd: error while loading shared libraries: librte_pmd_bond.so.2.1: cannot open shared object file: No such file or directory
root@debian-10:~/dpdk-18.11/myinstall# export LD_LIBRARY_PATH=/root/dpdk-18.11/myinstall/share/dpdk/x86_64-native-linuxapp-gcc/lib
root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w 82:00.0  -- --total-num-mbufs 1025
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
####################

-Sindhu

________________________________
From: PATRICK KEROULAS <patrick.keroulas@radio-canada.ca>
Sent: Monday, January 17, 2022 11:26:18 AM
To: Sindhura Bandi
Cc: users@dpdk.org; Venugopal Thacahappilly
Subject: Re: Trouble bringing up dpdk testpmd with Mellanox ports

Hello,
Try without `--no-pci` in your testpmd command.

On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
<sindhura.bandi@certesnetworks.com> wrote:
>
> Hi,
>
>
> I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect the ports.
>
>
> OS & Kernel:
>
> Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux
>
> The steps followed:
>
> Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
> Downloaded the dpdk-18.11 source and built it after making the following changes in the config:
>
>            CONFIG_RTE_LIBRTE_MLX5_PMD=y
>            CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
>            CONFIG_RTE_BUILD_SHARED_LIB=y
>
> When I run testpmd, it does not recognize any Mellanox ports:
>
>
> #########
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 --no-pci -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
>
>   io packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ##########
>
> root@debian-10:~# lspci | grep Mellanox
> 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> root@debian-10:~# ibv_devinfo
> hca_id:    mlx5_0
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7952
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
> hca_id:    mlx5_1
>     transport:            InfiniBand (0)
>     fw_ver:                16.28.4512
>     node_guid:            b8ce:f603:00f2:7953
>     sys_image_guid:            b8ce:f603:00f2:7952
>     vendor_id:            0x02c9
>     vendor_part_id:            4121
>     hw_ver:                0x0
>     board_id:            DEL0000000004
>     phys_port_cnt:            1
>         port:    1
>             state:            PORT_ACTIVE (4)
>             max_mtu:        4096 (5)
>             active_mtu:        1024 (3)
>             sm_lid:            0
>             port_lid:        0
>             port_lmc:        0x00
>             link_layer:        Ethernet
>
>
> I'm not sure where I'm going wrong. Any hints will be much appreciated.
>
> Thanks,
> Sindhu




* Re: Trouble bringing up dpdk testpmd with Mellanox ports
  2022-01-24 17:43   ` Sindhura Bandi
@ 2022-01-27 12:38     ` madhukar mythri
  0 siblings, 0 replies; 4+ messages in thread
From: madhukar mythri @ 2022-01-27 12:38 UTC (permalink / raw)
  To: Sindhura Bandi; +Cc: PATRICK KEROULAS, users, Venugopal Thacahappilly


Hi,

Make sure the kernel drivers (mlx5) are loaded properly on the Mellanox
devices. In DPDK 19.11 this works well; try with the full PCI domain and
the '-n' option, as follows:

./bin/testpmd -l 10-12 -n 1  -w 0000:82:00.0  --
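
If the ports still don't show up: since your build has CONFIG_RTE_BUILD_SHARED_LIB=y,
the PMDs are not linked into testpmd and must be loaded explicitly with the EAL
'-d' option. A sketch, assuming the 18.11 mlx5 library name:

./bin/testpmd -l 10-12 -n 1 -d librte_pmd_mlx5.so -w 0000:82:00.0 --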

Regards,
Madhukar.


On Thu, Jan 27, 2022 at 1:46 PM Sindhura Bandi <
sindhura.bandi@certesnetworks.com> wrote:

> Hi,
>
>
> Thank you for the response.
>
> I tried what you suggested, but with the same result.
>
>
> ##################
>
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w 82:00.0  -- --total-num-mbufs 1025
> ./bin/testpmd: error while loading shared libraries: librte_pmd_bond.so.2.1: cannot open shared object file: No such file or directory
> root@debian-10:~/dpdk-18.11/myinstall# export LD_LIBRARY_PATH=/root/dpdk-18.11/myinstall/share/dpdk/x86_64-native-linuxapp-gcc/lib
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12  -w 82:00.0  -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
>
>   io packet forwarding packets/burst=32
>   nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ####################
>
> -Sindhu
>
> ------------------------------
> From: PATRICK KEROULAS <patrick.keroulas@radio-canada.ca>
> Sent: Monday, January 17, 2022 11:26:18 AM
> To: Sindhura Bandi
> Cc: users@dpdk.org; Venugopal Thacahappilly
> Subject: Re: Trouble bringing up dpdk testpmd with Mellanox ports
>
> Hello,
> Try without `--no-pci` in your testpmd command.
>
> On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
> <sindhura.bandi@certesnetworks.com> wrote:
> >
> > Hi,
> >
> >
> > I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect the ports.
> >
> >
> > OS & Kernel:
> >
> > Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux
> >
> > The steps followed:
> >
> > Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
> > Downloaded the dpdk-18.11 source and built it after making the following changes in the config:
> >
> >            CONFIG_RTE_LIBRTE_MLX5_PMD=y
> >            CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
> >            CONFIG_RTE_BUILD_SHARED_LIB=y
> >
> > When I run testpmd, it does not recognize any Mellanox ports:
> >
> >
> > #########
> > root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 --no-pci -- --total-num-mbufs 1025
> > EAL: Detected 24 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > testpmd: No probed ethernet devices
> > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Done
> > No commandline core given, start packet forwarding
> > io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native
> >
> >   io packet forwarding packets/burst=32
> >   nb forwarding cores=1 - nb forwarding ports=0
> > Press enter to exit
> > ##########
> >
> > root@debian-10:~# lspci | grep Mellanox
> > 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> > 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
> > root@debian-10:~# ibv_devinfo
> > hca_id:    mlx5_0
> >     transport:            InfiniBand (0)
> >     fw_ver:                16.28.4512
> >     node_guid:            b8ce:f603:00f2:7952
> >     sys_image_guid:            b8ce:f603:00f2:7952
> >     vendor_id:            0x02c9
> >     vendor_part_id:            4121
> >     hw_ver:                0x0
> >     board_id:            DEL0000000004
> >     phys_port_cnt:            1
> >         port:    1
> >             state:            PORT_ACTIVE (4)
> >             max_mtu:        4096 (5)
> >             active_mtu:        1024 (3)
> >             sm_lid:            0
> >             port_lid:        0
> >             port_lmc:        0x00
> >             link_layer:        Ethernet
> >
> > hca_id:    mlx5_1
> >     transport:            InfiniBand (0)
> >     fw_ver:                16.28.4512
> >     node_guid:            b8ce:f603:00f2:7953
> >     sys_image_guid:            b8ce:f603:00f2:7952
> >     vendor_id:            0x02c9
> >     vendor_part_id:            4121
> >     hw_ver:                0x0
> >     board_id:            DEL0000000004
> >     phys_port_cnt:            1
> >         port:    1
> >             state:            PORT_ACTIVE (4)
> >             max_mtu:        4096 (5)
> >             active_mtu:        1024 (3)
> >             sm_lid:            0
> >             port_lid:        0
> >             port_lmc:        0x00
> >             link_layer:        Ethernet
> >
> >
> > I'm not sure where I'm going wrong. Any hints will be much appreciated.
> >
> > Thanks,
> > Sindhu
>
>


