DPDK usage discussions
From: Pravein GK <praveingk@gmail.com>
To: Raslan Darawsheh <rasland@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] common_mlx5: Failed to load driver = net_mlx5 Error
Date: Thu, 28 Jan 2021 14:32:29 +0530	[thread overview]
Message-ID: <CAE4=sSfn=w66jyoKgg3xbxhBFwBkTGs9MiTfW1iFtyZWnOkx_A@mail.gmail.com> (raw)
In-Reply-To: <DM6PR12MB2748B831BEC0F0ADE6D941D7CFBA9@DM6PR12MB2748.namprd12.prod.outlook.com>

I did not see any errors in dmesg.

Please find the dmesg and ibdev2netdev output attached.

Thanks,
Pravein

On Thu, Jan 28, 2021 at 2:09 PM Raslan Darawsheh <rasland@nvidia.com> wrote:

> Hi Pravein,
>
> Do you happen to have any errors in the dmesg output?
>
> Also, can you post the output of ibdev2netdev?
>
> Kindest regards,
>
> Raslan Darawsheh
>
> From: Pravein GK <praveingk@gmail.com>
> Sent: Thursday, January 28, 2021 10:36 AM
> To: Raslan Darawsheh <rasland@nvidia.com>
> Cc: users@dpdk.org
> Subject: Re: [dpdk-users] common_mlx5: Failed to load driver = net_mlx5 Error
>
> Hi Raslan,
>
> Thanks for your reply.
>
> The lsmod output is below:
>
> $ lsmod | grep mlx5_
> mlx5_ib               364544  0
> ib_uverbs             131072  2 rdma_ucm,mlx5_ib
> mlx5_core            1294336  1 mlx5_ib
> mlxfw                  24576  1 mlx5_core
> psample                20480  1 mlx5_core
> devlink                45056  1 mlx5_core
> ib_core               311296  8 rdma_cm,ib_ipoib,iw_cm,ib_umad,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm
> mlx_compat             45056  10 rdma_cm,ib_ipoib,iw_cm,ib_umad,ib_core,rdma_ucm,ib_uverbs,mlx5_ib,ib_cm,mlx5_core
> ptp                    20480  1 mlx5_core
>
> I tried the force restart, and even after that I get the same error.
>
> Thanks,
>
> Pravein
>
> On Thu, Jan 28, 2021 at 1:52 PM Raslan Darawsheh <rasland@nvidia.com>
> wrote:
>
> Hi Pravein,
>
> Can you kindly confirm that the kernel drivers (mlx5_ib, mlx5_core) are
> loaded?
> (lsmod | grep mlx5_)
>
> If not, please try to restart the kernel driver:
>
> /etc/init.d/openibd force-restart
>
> Kindest regards,
> Raslan Darawsheh
>
> > -----Original Message-----
> > From: users <users-bounces@dpdk.org> On Behalf Of Pravein GK
> > Sent: Thursday, January 28, 2021 7:49 AM
> > To: users@dpdk.org
> > Subject: [dpdk-users] common_mlx5: Failed to load driver = net_mlx5 Error
> >
> > Hello All,
> >
> > I am using DPDK 20.08 with Mellanox ConnectX-5 NICs.
> > I have installed the latest OFED drivers with "--dpdk --upstream-libs".
> > However, while launching a DPDK application, I get the error below:
> >
> > > EAL: Detected 24 lcore(s)
> > > EAL: Detected 2 NUMA nodes
> > > EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> > > EAL: Selected IOVA mode 'PA'
> > > EAL: No available hugepages reported in hugepages-1048576kB
> > > EAL: Probing VFIO support...
> > > EAL:   Invalid NUMA socket, default to 0
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:1f:00.0 (socket 0)
> > > net_mlx5: no Verbs device matches PCI device 0000:1f:00.0, are kernel drivers loaded?
> > > common_mlx5: Failed to load driver = net_mlx5.
> > > EAL: Requested device 0000:1f:00.0 cannot be used
> > > EAL:   Invalid NUMA socket, default to 0
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:1f:00.1 (socket 0)
> > > net_mlx5: no Verbs device matches PCI device 0000:1f:00.1, are kernel drivers loaded?
> > > common_mlx5: Failed to load driver = net_mlx5.
> > > EAL: Requested device 0000:1f:00.1 cannot be used
> > > EAL: No legacy callbacks, legacy socket not created
> > > EAL: Error - exiting with code: 1
> > >   Cause: Cannot create mbuf pool , error=Invalid argument
> >
> > I have been stuck on this error for a long time and unable to proceed,
> > despite trying most of the suggestions in the forums. Please help.
> >
> > Thanks,
> > Pravein
>
>
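
For context on the final error in the quoted log above ("Cannot create mbuf pool , error=Invalid argument"): it is usually a downstream symptom of the failed mlx5 probe rather than a separate problem. Below is a minimal sketch, assuming the application sizes its mbuf pool from the number of probed ports; the pool name, per-port mbuf count, and overall structure are illustrative and not taken from this thread. With no Verbs device matched, rte_eth_dev_count_avail() returns 0, the requested pool size becomes 0, and rte_pktmbuf_pool_create() rejects it with EINVAL ("Invalid argument").

    #include <stdlib.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>
    #include <rte_errno.h>

    #define MBUFS_PER_PORT 8192 /* illustrative sizing constant */

    int main(int argc, char **argv)
    {
            if (rte_eal_init(argc, argv) < 0)
                    rte_exit(EXIT_FAILURE, "EAL init failed\n");

            /* When the mlx5 PMD cannot match a Verbs device, it registers
             * no ethdev ports, so the available port count is 0. */
            uint16_t nb_ports = rte_eth_dev_count_avail();

            /* A pool sized from nb_ports then requests 0 mbufs, and the
             * mempool layer rejects that with EINVAL ("Invalid argument"). */
            struct rte_mempool *pool = rte_pktmbuf_pool_create("mbuf_pool",
                            nb_ports * MBUFS_PER_PORT, 256, 0,
                            RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
            if (pool == NULL)
                    rte_exit(EXIT_FAILURE, "Cannot create mbuf pool, error=%s\n",
                                    rte_strerror(rte_errno));
            return 0;
    }

In other words, the "no Verbs device matches PCI device ..." and "Failed to load driver = net_mlx5" messages are the root cause to chase (kernel modules and rdma-core/Verbs visibility), and the mbuf pool error should disappear once the ports probe successfully.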
-------------- next part --------------
$ dmesg | grep mlx
[    5.520063] mlx_compat: loading out-of-tree module taints kernel.
[    5.539232] mlx_compat: loading out-of-tree module taints kernel.
[    5.582962] mlx_compat: module verification failed: signature and/or required key missing - tainting kernel
[    5.903950] mlx5_core 0000:1f:00.0: enabling device (0040 -> 0042)
[    5.941666] mlx5_core 0000:1f:00.0: firmware version: 16.29.1016
[    5.941692] mlx5_core 0000:1f:00.0: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[    6.200500] mlx5_core 0000:1f:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[    6.200639] mlx5_core 0000:1f:00.0: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[    6.206971] mlx5_core 0000:1f:00.0: Port module event: module 0, Cable plugged
[    6.207523] mlx5_core 0000:1f:00.0: mlx5_pcie_event:302:(pid 250): PCIe slot advertised sufficient power (27W).
[    6.215561] mlx5_core 0000:1f:00.0: mlx5_fw_tracer_start:815:(pid 336): FWTracer: Ownership granted and active
[    6.223151] mlx5_core 0000:1f:00.1: enabling device (0040 -> 0042)
[    6.223891] mlx5_core 0000:1f:00.1: firmware version: 16.29.1016
[    6.223953] mlx5_core 0000:1f:00.1: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[    6.494022] mlx5_core 0000:1f:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[    6.494269] mlx5_core 0000:1f:00.1: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[    6.502335] mlx5_core 0000:1f:00.1: Port module event: module 1, Cable plugged
[    6.502778] mlx5_core 0000:1f:00.1: mlx5_pcie_event:302:(pid 399): PCIe slot advertised sufficient power (27W).
[    6.519079] mlx5_core 0000:1f:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[    6.774159] mlx5_core 0000:1f:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[    6.790042] mlx5_core 0000:1f:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[    7.020323] mlx5_core 0000:1f:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
[    7.037677] mlx5_core 0000:1f:00.1 ens3f1: renamed from eth1
[    7.084399] mlx5_core 0000:1f:00.0 ens3f0: renamed from eth0
[ 3204.024443] mlx5_core 0000:1f:00.1: mlx5_fw_tracer_start:815:(pid 450): FWTracer: Ownership granted and active
[ 3204.942979] mlx5_core 0000:1f:00.0: E-Switch: cleanup
[ 3209.991140] mlx5_core 0000:1f:00.1: E-Switch: cleanup
[ 4360.255008] mlx5_core 0000:1f:00.0: firmware version: 16.29.1016
[ 4360.255037] mlx5_core 0000:1f:00.0: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[ 4360.509355] mlx5_core 0000:1f:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[ 4360.509473] mlx5_core 0000:1f:00.0: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[ 4360.517582] mlx5_core 0000:1f:00.0: Port module event: module 0, Cable plugged
[ 4360.518581] mlx5_core 0000:1f:00.0: mlx5_pcie_event:302:(pid 450): PCIe slot advertised sufficient power (27W).
[ 4360.526609] mlx5_core 0000:1f:00.0: mlx5_fw_tracer_start:815:(pid 2646): FWTracer: Ownership granted and active
[ 4360.534803] mlx5_core 0000:1f:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[ 4360.765688] mlx5_core 0000:1f:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[ 4360.779786] mlx5_core 0000:1f:00.0 ens3f0: renamed from eth0
[ 4362.947189] mlx5_core 0000:1f:00.1: firmware version: 16.29.1016
[ 4362.947221] mlx5_core 0000:1f:00.1: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[ 4363.214633] mlx5_core 0000:1f:00.1: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[ 4363.214754] mlx5_core 0000:1f:00.1: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[ 4363.223401] mlx5_core 0000:1f:00.1: Port module event: module 1, Cable plugged
[ 4363.223695] mlx5_core 0000:1f:00.1: mlx5_pcie_event:302:(pid 2720): PCIe slot advertised sufficient power (27W).
[ 4363.237643] mlx5_core 0000:1f:00.1: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[ 4363.466358] mlx5_core 0000:1f:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295
[ 4363.480310] mlx5_core 0000:1f:00.1 ens3f1: renamed from eth0
[ 4376.198303] mlx5_core 0000:1f:00.1: mlx5_fw_tracer_start:815:(pid 5): FWTracer: Ownership granted and active
[ 4377.248797] mlx5_core 0000:1f:00.0: E-Switch: cleanup
[ 4381.736724] mlx5_core 0000:1f:00.1: E-Switch: cleanup
[ 5127.246120] mlx5_core 0000:1f:00.0: firmware version: 16.29.1016
[ 5127.246148] mlx5_core 0000:1f:00.0: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[ 5127.500888] mlx5_core 0000:1f:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[ 5127.501005] mlx5_core 0000:1f:00.0: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[ 5127.508938] mlx5_core 0000:1f:00.0: Port module event: module 0, Cable plugged
[ 5127.509728] mlx5_core 0000:1f:00.0: mlx5_pcie_event:302:(pid 5): PCIe slot advertised sufficient power (27W).
[ 5127.517387] mlx5_core 0000:1f:00.0: mlx5_fw_tracer_start:815:(pid 29725): FWTracer: Ownership granted and active
[ 5127.524704] mlx5_core 0000:1f:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[ 5127.746557] mlx5_core 0000:1f:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[ 5127.762118] mlx5_core 0000:1f:00.0 ens3f0: renamed from eth0
[16520.222993] mlx5_core 0000:1f:00.0: E-Switch: cleanup
[16524.824892] mlx5_core 0000:1f:00.0: firmware version: 16.29.1016
[16524.824917] mlx5_core 0000:1f:00.0: 64.000 Gb/s available PCIe bandwidth, limited by 5 GT/s x16 link at 0000:00:07.0 (capable of 252.048 Gb/s with 16 GT/s x16 link)
[16525.075910] mlx5_core 0000:1f:00.0: Rate limit: 127 rates are supported, range: 0Mbps to 97656Mbps
[16525.076021] mlx5_core 0000:1f:00.0: E-Switch: Total vports 66, per vport: max uc(1024) max mc(16384)
[16525.083206] mlx5_core 0000:1f:00.0: Port module event: module 0, Cable plugged
[16525.083542] mlx5_core 0000:1f:00.0: mlx5_pcie_event:302:(pid 5): PCIe slot advertised sufficient power (27W).
[16525.091017] mlx5_core 0000:1f:00.0: mlx5_fw_tracer_start:815:(pid 2850): FWTracer: Ownership granted and active
[16525.099705] mlx5_core 0000:1f:00.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0)
[16525.338241] mlx5_core 0000:1f:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295
[16525.353781] mlx5_core 0000:1f:00.0 ens3f0: renamed from eth0
[16547.184865] mlx5_core 0000:1f:00.0 ens3f0: Link down
[16557.216630] mlx5_core 0000:1f:00.0 ens3f0: Link up

$ ibdev2netdev
mlx5_0 port 1 ==> ens3f0 (Up)



Thread overview: 7+ messages
2021-01-28  5:49 Pravein GK
2021-01-28  8:22 ` Raslan Darawsheh
2021-01-28  8:36   ` Pravein GK
2021-01-28  8:39     ` Raslan Darawsheh
2021-01-28  9:02       ` Pravein GK [this message]
2021-02-02  7:50         ` Pravein GK
2021-02-02  8:29           ` Raslan Darawsheh
