* [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV flow is not supported] on Mellanox VF in VM
From: Das, Surajit @ 2021-01-27 17:07 UTC
To: users; +Cc: Dharwadkar, Sriram
Hi DPDK experts,
I am hitting an error when running the basicfwd sample app on Mellanox VFs.
The error is the same with the l2fwd, l2fwd-crypto and ipsec-secgw apps.
I have passed the Mellanox VFs through to a VM via PCI passthrough.
The command, output and relevant version info are pasted below.
If anyone else has faced this, please share what fixed it.
Command and Output:
[root@dpdk-dev-1 build]# ./basicfwd -l 1 -n 4 --pci-whitelist 0000:00:0d.0 --pci-whitelist 0000:00:0e.0 --log-level 8
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:0d.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15b3:1014 net_mlx5
net_mlx5: DV flow is not supported
net_mlx5: probe of PCI device 0000:00:0d.0 aborted after encountering an error: Operation not supported
EAL: Requested device 0000:00:0d.0 cannot be used
EAL: PCI device 0000:00:0e.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15b3:1014 net_mlx5
net_mlx5: DV flow is not supported
net_mlx5: probe of PCI device 0000:00:0e.0 aborted after encountering an error: Operation not supported
EAL: Requested device 0000:00:0e.0 cannot be used
EAL: Error - exiting with code: 1
Cause: Error: number of ports must be even
[root@dpdk-dev-1 build]#
Versions:
3.10.0-1062.el7.x86_64
CentOS Linux release 7.7.1908 (Core)
dpdk-stable-19.11.6
NIC detail:
00:0c.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 09)
Subsystem: Super Micro Computer Inc Device 0000
Physical Slot: 12
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Region 0: Memory at fe220000 (64-bit, prefetchable) [size=128K]
Region 3: Memory at fe244000 (64-bit, prefetchable) [size=16K]
Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
Vector table: BAR=3 offset=00000000
PBA: BAR=3 offset=00002000
Capabilities: [a0] Express (v2) Endpoint, MSI 00
DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
MaxPayload 128 bytes, MaxReadReq 128 bytes
DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
DevCap2: Completion Timeout: Range AB, TimeoutDis+, LTR-, OBFF Not Supported
DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
Kernel driver in use: iavf
Kernel modules: iavf
Regards,
Surajit
* Re: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV flow is not supported] on Mellanox VF in VM
From: Raslan Darawsheh @ 2021-01-28 8:25 UTC
To: Das, Surajit, users; +Cc: Dharwadkar, Sriram
Hello Das,
Can you kindly try adding the following devarg to your pci-whitelist?
--pci-whitelist 0000:00:0d.0,dv_flow_en=0 --pci-whitelist 0000:00:0e.0,dv_flow_en=0
Kindest regards,
Raslan Darawsheh
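[Editor's note: mlx5 devargs such as dv_flow_en=0 are appended to the PCI address after a comma, one --pci-whitelist option per device. A minimal sketch of composing such arguments; the helper function is hypothetical and not part of DPDK:]

```shell
#!/bin/sh
# with_devargs: hypothetical helper (not part of DPDK) that appends a
# devarg string to each PCI address and emits the matching EAL
# --pci-whitelist options.
with_devargs() {
    devarg="$1"; shift
    for addr in "$@"; do
        printf '%s %s,%s ' --pci-whitelist "$addr" "$devarg"
    done
    printf '\n'
}

with_devargs dv_flow_en=0 0000:00:0d.0 0000:00:0e.0
# emits: --pci-whitelist 0000:00:0d.0,dv_flow_en=0 --pci-whitelist 0000:00:0e.0,dv_flow_en=0
```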
> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Das, Surajit
> Sent: Wednesday, January 27, 2021 7:08 PM
> To: users <users@dpdk.org>
> Cc: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
> Subject: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV
> flow is not supported] on Mellanox VF in VM
>
> [quoted message trimmed; see the first message in this thread]
* Re: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV flow is not supported] on Mellanox VF in VM
From: Das, Surajit @ 2021-01-28 11:06 UTC
To: Raslan Darawsheh, users; +Cc: Dharwadkar, Sriram
Hi Raslan,
If I try that option inside the VM, where MLNX_OFED is not installed, I still hit an error, though not the same one.
On bare metal, where MLNX_OFED is installed, the same binary runs fine without the error.
In the VM, I checked for the presence of the drivers and found them loaded.
Driver Info VM:
[root@dpdk-dev-1 build]# lsmod | grep mlx
mlx5_ib 262895 0
ib_uverbs 102208 2 mlx5_ib,rdma_ucm
ib_core 255469 13 rdma_cm,ib_cm,iw_cm,rpcrdma,mlx5_ib,ib_srp,ib_iser,ib_srpt,ib_umad,ib_uverbs,rdma_ucm,ib_ipoib,ib_isert
mlx5_core 628569 1 mlx5_ib
mlxfw 18227 1 mlx5_core
devlink 60067 1 mlx5_core
ptp 19231 1 mlx5_core
[root@dpdk-dev-1 build]# modinfo mlx5_core | grep version
version: 5.0-0
rhelversion: 7.7
srcversion: 8657804ACAFA1FD34AFF1A7
vermagic: 3.10.0-1062.el7.x86_64 SMP mod_unload modversions
[root@dpdk-dev-1 build]# /root/dpdk-stable-19.11.6/usertools/dpdk-devbind.py --status | grep mlx
0000:00:0d.0 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=ens13 drv=mlx5_core unused=igb_uio,uio_pci_generic
0000:00:0e.0 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=ens14 drv=mlx5_core unused=igb_uio,uio_pci_generic
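[Editor's note: the lsmod output above covers only the kernel side; the mlx5 PMD also relies on the user-space verbs stack (libibverbs plus the mlx5 provider library), which MLNX_OFED or the rdma-core package supplies. A hedged diagnostic sketch for the VM — library paths and package names are assumptions for a 64-bit CentOS 7 layout, not confirmed in the thread:]

```shell
#!/bin/sh
# Hedged check for the user-space rdma stack the mlx5 PMD links against;
# lsmod only shows kernel modules. Paths are assumed for 64-bit CentOS 7.
check_userspace_rdma() {
    if command -v ibv_devinfo >/dev/null 2>&1; then
        # ibv_devinfo comes with libibverbs; it enumerates verbs devices
        ibv_devinfo | grep -i 'hca_id\|transport'
    else
        echo "ibv_devinfo not found: libibverbs utilities may be missing"
    fi
    ls /usr/lib64/libibverbs.so* /usr/lib64/libmlx5.so* 2>/dev/null \
        || echo "libibverbs/libmlx5 shared objects not found in /usr/lib64"
}

check_userspace_rdma
```

[If the VM shows the kernel modules but lacks these libraries, installing rdma-core (or MLNX_OFED's user-space packages) inside the VM would be a reasonable first thing to try.]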
VM error output (with dv_flow_en=0 added):
[root@dpdk-dev-1 build]# ./basicfwd -l 1 -n 4 --pci-whitelist 0000:00:0d.0,dv_flow_en=0 --pci-whitelist 0000:00:0e.0,dv_flow_en=0 --log-level 8
EAL: Detected 8 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: No available hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:00:0d.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15b3:1014 net_mlx5
net_mlx5: mlx5.c:3422: mlx5_pci_probe(): probe of PCI device 0000:00:0d.0 aborted after encountering an error: Operation not supported
EAL: Requested device 0000:00:0d.0 cannot be used
EAL: PCI device 0000:00:0e.0 on NUMA socket -1
EAL: Invalid NUMA socket, default to 0
EAL: probe driver: 15b3:1014 net_mlx5
net_mlx5: mlx5.c:3422: mlx5_pci_probe(): probe of PCI device 0000:00:0e.0 aborted after encountering an error: Operation not supported
EAL: Requested device 0000:00:0e.0 cannot be used
EAL: Error - exiting with code: 1
Cause: Error: number of ports must be even
[root@dpdk-dev-1 build]#
Running the same binary on bare metal, where MLNX_OFED is installed, works fine:
[root@experimental build]# ./basicfwd -l 1 -n 4 --pci-whitelist 0000:65:01.0,dv_flow_en=0 --pci-whitelist 0000:65:01.1,dv_flow_en=0 --log-level 8
EAL: Detected 48 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:65:01.0 on NUMA socket 0
EAL: probe driver: 15b3:1014 net_mlx5
EAL: PCI device 0000:65:01.1 on NUMA socket 0
EAL: probe driver: 15b3:1014 net_mlx5
Port 0 MAC: 12 e4 97 76 bf af
Port 1 MAC: 56 f9 e2 3f c7 34
Drivers on bare metal:
[root@experimental ~]# lsmod | grep mlx
mlx5_ib 262797 0
ib_uverbs 102208 1 mlx5_ib
ib_core 255552 2 mlx5_ib,ib_uverbs
mlx5_core 638172 1 mlx5_ib
mlxfw 18227 1 mlx5_core
devlink 60067 1 mlx5_core
ptp 19231 2 i40e,mlx5_core
[root@experimental ~]# modinfo mlx5_core | grep version
version: 5.0-0
rhelversion: 7.8
srcversion: 347B715A7BFEF96508E57F2
vermagic: 3.10.0-1127.el7.x86_64 SMP mod_unload modversions
[root@experimental ~]# ./dpdk-devbind.py --status | grep "65:01.0\|65:01.1"
0000:65:01.0 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp101s1 drv=mlx5_core unused=vfio-pci
0000:65:01.1 'MT27700 Family [ConnectX-4 Virtual Function] 1014' if=enp101s1f1 drv=mlx5_core unused=vfio-pci
[root@experimental ~]#
Now, my question is: what is MLNX_OFED providing that I am missing?
Regards,
Surajit
From: Raslan Darawsheh <rasland@nvidia.com>
Sent: Thursday, January 28, 2021 1:56 PM
To: Das, Surajit <Surajit.Das@commscope.com>; users <users@dpdk.org>
Cc: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: RE: Skeleton basicfwd sample app hits error [net_mlx5: DV flow is not supported] on Mellanox VF in VM
Hello Das,
Can you kindly try to add the following devarg to your pci-whitelist ?
--pci-whitelist 0000:00:0d.0,dv_flow_en=0 --pci-whitelist 0000:00:0e.0,dv_flow_en=0
Kindest regards,
Raslan Darawsheh
> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Das, Surajit
> Sent: Wednesday, January 27, 2021 7:08 PM
> To: users <users@dpdk.org>
> Cc: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
> Subject: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV
> flow is not supported] on Mellanox VF in VM
>
> [quoted message trimmed; see the first message in this thread]