DPDK usage discussions
* Trouble bringing up dpdk testpmd with Mellanox ports
@ 2022-01-12 13:28 Sindhura Bandi
  2022-01-17 16:26 ` PATRICK KEROULAS
  0 siblings, 1 reply; 4+ messages in thread
From: Sindhura Bandi @ 2022-01-12 13:28 UTC (permalink / raw)
  To: users; +Cc: Venugopal Thacahappilly


Hi,

I'm trying to bring up the dpdk-testpmd application using Mellanox ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect the ports.

OS & Kernel:

Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21) x86_64 GNU/Linux

The steps I followed:

  *   Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64 (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
  *   Downloaded the dpdk-18.11 source and built it after making the following changes in the config (build sketch after this list)

           CONFIG_RTE_LIBRTE_MLX5_PMD=y
           CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
           CONFIG_RTE_BUILD_SHARED_LIB=y

  *   When I run testpmd, it does not recognize any Mellanox ports (output below)
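
For reference, the build and install went roughly like this (a sketch rather than my exact shell history; the x86_64-native-linuxapp-gcc target and the myinstall directory match my setup):

           cd ~/dpdk-18.11
           # flip the three options above from =n to =y in the common config template
           sed -i -e 's/CONFIG_RTE_LIBRTE_MLX5_PMD=n/CONFIG_RTE_LIBRTE_MLX5_PMD=y/' \
                  -e 's/CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=n/CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y/' \
                  -e 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/' \
                  config/common_base
           # configure, build and stage everything under ./myinstall
           # (testpmd lands in myinstall/bin, the PMD .so files in myinstall/lib)
           make install T=x86_64-native-linuxapp-gcc DESTDIR=myinstall prefix=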


#########
root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3  -w 82:00.0 --no-pci -- --total-num-mbufs 1025
EAL: Detected 24 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
testpmd: No probed ethernet devices
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Done
No commandline core given, start packet forwarding
io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support enabled, MP allocation mode: native

  io packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=0
Press enter to exit
##########

root@debian-10:~# lspci | grep Mellanox
82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family [ConnectX-5 Ex]
root@debian-10:~# ibv_devinfo
hca_id:    mlx5_0
    transport:            InfiniBand (0)
    fw_ver:                16.28.4512
    node_guid:            b8ce:f603:00f2:7952
    sys_image_guid:            b8ce:f603:00f2:7952
    vendor_id:            0x02c9
    vendor_part_id:            4121
    hw_ver:                0x0
    board_id:            DEL0000000004
    phys_port_cnt:            1
        port:    1
            state:            PORT_ACTIVE (4)
            max_mtu:        4096 (5)
            active_mtu:        1024 (3)
            sm_lid:            0
            port_lid:        0
            port_lmc:        0x00
            link_layer:        Ethernet

hca_id:    mlx5_1
    transport:            InfiniBand (0)
    fw_ver:                16.28.4512
    node_guid:            b8ce:f603:00f2:7953
    sys_image_guid:            b8ce:f603:00f2:7952
    vendor_id:            0x02c9
    vendor_part_id:            4121
    hw_ver:                0x0
    board_id:            DEL0000000004
    phys_port_cnt:            1
        port:    1
            state:            PORT_ACTIVE (4)
            max_mtu:        4096 (5)
            active_mtu:        1024 (3)
            sm_lid:            0
            port_lid:        0
            port_lmc:        0x00
            link_layer:        Ethernet


I'm not sure where I'm going wrong; any hints would be much appreciated.

Thanks,
Sindhu


