From: madhukar mythri <madhukar.mythri@gmail.com>
To: Sindhura Bandi <sindhura.bandi@certesnetworks.com>
Cc: PATRICK KEROULAS <patrick.keroulas@radio-canada.ca>,
"users@dpdk.org" <users@dpdk.org>,
Venugopal Thacahappilly <venugopal@certesnetworks.com>
Subject: Re: Trouble bringing up dpdk testpmd with Mellanox ports
Date: Thu, 27 Jan 2022 18:08:11 +0530 [thread overview]
Message-ID: <CAAUNki3D8ONVAJwy0YBBwKYLpByqvy6HFDbyCNPeOCE6FShaNg@mail.gmail.com> (raw)
In-Reply-To: <325bfb23ee5849bb90b69b837b412403@certesnetworks.com>
Hi,
Make sure the kernel drivers (mlx5_core) are loaded properly for the Mellanox
devices.
This works well in DPDK 19.11; try specifying the full PCI domain and the
'-n' (memory channels) option as follows:
./bin/testpmd -l 10-12 -n 1 -w 0000:82:00.0 --
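As a quick sanity check before launching testpmd, something like the following
sketch can confirm the driver state (commands assume the PCI address
0000:82:00.0 from your output; exact module names can vary with your OFED
install):

```shell
# Sketch: verify the mlx5 kernel modules are present. The mlx5 PMD is a
# bifurcated driver: the port stays bound to the kernel driver, so it must
# NOT be bound to vfio-pci or igb_uio.
lsmod | grep mlx5              # expect mlx5_core (and mlx5_ib with OFED)
modprobe mlx5_core             # load the module if it is missing
ibv_devinfo | grep -E 'hca_id|state'   # ports should report PORT_ACTIVE
lspci -k -s 82:00.0            # "Kernel driver in use" should be mlx5_core
```

If the modules are loaded and the ports are active but testpmd still probes
no devices, the EAL whitelist address is the usual suspect.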
Regards,
Madhukar.
On Thu, Jan 27, 2022 at 1:46 PM Sindhura Bandi <
sindhura.bandi@certesnetworks.com> wrote:
> Hi,
>
>
> Thank you for the response.
>
> I tried what you suggested, but with the same result.
>
>
> ##################
>
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12 -w
> 82:00.0 -- --total-num-mbufs 1025
> ./bin/testpmd: error while loading shared libraries:
> librte_pmd_bond.so.2.1: cannot open shared object file: No such file or
> directory
> root@debian-10:~/dpdk-18.11/myinstall# export
> LD_LIBRARY_PATH=/root/dpdk-18.11/myinstall/share/dpdk/x86_64-native-linuxapp-gcc/lib
> root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 10-12 -w
> 82:00.0 -- --total-num-mbufs 1025
> EAL: Detected 24 lcore(s)
> EAL: Detected 2 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: No free hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> testpmd: No probed ethernet devices
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176,
> socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176,
> socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> No commandline core given, start packet forwarding
> io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support
> enabled, MP allocation mode: native
>
> io packet forwarding packets/burst=32
> nb forwarding cores=1 - nb forwarding ports=0
> Press enter to exit
> ####################
>
> -Sindhu
>
> ------------------------------
> From: PATRICK KEROULAS <patrick.keroulas@radio-canada.ca>
> Sent: Monday, January 17, 2022 11:26:18 AM
> To: Sindhura Bandi
> Cc: users@dpdk.org; Venugopal Thacahappilly
> Subject: Re: Trouble bringing up dpdk testpmd with Mellanox ports
>
> Hello,
> Try without `--no-pci` in your testpmd command.
>
> On Sun, Jan 16, 2022 at 6:08 AM Sindhura Bandi
> <sindhura.bandi@certesnetworks.com> wrote:
> >
> > Hi,
> >
> >
> > I'm trying to bring up the dpdk-testpmd application using Mellanox
> ConnectX-5 ports. With a custom-built DPDK, testpmd is not able to detect
> the ports.
> >
> >
> > OS & Kernel:
> >
> > Linux debian-10 4.19.0-17-amd64 #1 SMP Debian 4.19.194-2 (2021-06-21)
> x86_64 GNU/Linux
> >
> > The steps followed:
> >
> > Installed MLNX_OFED_LINUX-4.9-4.0.8.0-debian10.0-x86_64
> (./mlnxofedinstall --skip-distro-check --upstream-libs --dpdk)
> > Downloaded the dpdk-18.11 source, and built it after making the following
> changes in the config:
> >
> > CONFIG_RTE_LIBRTE_MLX5_PMD=y
> > CONFIG_RTE_TEST_PMD_RECORD_CORE_CYCLES=y
> > CONFIG_RTE_BUILD_SHARED_LIB=y
> >
> > When I run testpmd, it is not recognizing any Mellanox ports
> >
> >
> > #########
> > root@debian-10:~/dpdk-18.11/myinstall# ./bin/testpmd -l 1-3 -w 82:00.0
> --no-pci -- --total-num-mbufs 1025
> > EAL: Detected 24 lcore(s)
> > EAL: Detected 2 NUMA nodes
> > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > EAL: No free hugepages reported in hugepages-1048576kB
> > EAL: Probing VFIO support...
> > testpmd: No probed ethernet devices
> > testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=1025, size=2176,
> socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mbuf_pool_socket_1>: n=1025, size=2176,
> socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Done
> > No commandline core given, start packet forwarding
> > io packet forwarding - ports=0 - cores=0 - streams=0 - NUMA support
> enabled, MP allocation mode: native
> >
> > io packet forwarding packets/burst=32
> > nb forwarding cores=1 - nb forwarding ports=0
> > Press enter to exit
> > ##########
> >
> > root@debian-10:~# lspci | grep Mellanox
> > 82:00.0 Ethernet controller: Mellanox Technologies MT28800 Family
> [ConnectX-5 Ex]
> > 82:00.1 Ethernet controller: Mellanox Technologies MT28800 Family
> [ConnectX-5 Ex]
> > root@debian-10:~# ibv_devinfo
> > hca_id: mlx5_0
> > transport: InfiniBand (0)
> > fw_ver: 16.28.4512
> > node_guid: b8ce:f603:00f2:7952
> > sys_image_guid: b8ce:f603:00f2:7952
> > vendor_id: 0x02c9
> > vendor_part_id: 4121
> > hw_ver: 0x0
> > board_id: DEL0000000004
> > phys_port_cnt: 1
> > port: 1
> > state: PORT_ACTIVE (4)
> > max_mtu: 4096 (5)
> > active_mtu: 1024 (3)
> > sm_lid: 0
> > port_lid: 0
> > port_lmc: 0x00
> > link_layer: Ethernet
> >
> > hca_id: mlx5_1
> > transport: InfiniBand (0)
> > fw_ver: 16.28.4512
> > node_guid: b8ce:f603:00f2:7953
> > sys_image_guid: b8ce:f603:00f2:7952
> > vendor_id: 0x02c9
> > vendor_part_id: 4121
> > hw_ver: 0x0
> > board_id: DEL0000000004
> > phys_port_cnt: 1
> > port: 1
> > state: PORT_ACTIVE (4)
> > max_mtu: 4096 (5)
> > active_mtu: 1024 (3)
> > sm_lid: 0
> > port_lid: 0
> > port_lmc: 0x00
> > link_layer: Ethernet
> >
> >
> > I'm not sure where I'm going wrong. Any hints will be much appreciated.
> >
> > Thanks,
> > Sindhu
>
>
Thread overview: 4 messages
2022-01-12 13:28 Sindhura Bandi
2022-01-17 16:26 ` PATRICK KEROULAS
2022-01-24 17:43 ` Sindhura Bandi
2022-01-27 12:38 ` madhukar mythri [this message]