From: Raslan Darawsheh <rasland@nvidia.com>
To: "Das, Surajit" <Surajit.Das@commscope.com>, users <users@dpdk.org>
Cc: "Dharwadkar, Sriram" <Sriram.Dharwadkar@commscope.com>
Subject: Re: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV flow is not supported] on Mellanox VF in VM
Date: Thu, 28 Jan 2021 08:25:52 +0000 [thread overview]
Message-ID: <DM6PR12MB2748E6622FC19955C51DC1C7CFBA9@DM6PR12MB2748.namprd12.prod.outlook.com> (raw)
In-Reply-To: <SN6PR14MB22536A19F7E099C17F47819DE9BB9@SN6PR14MB2253.namprd14.prod.outlook.com>
Hello Das,
Can you kindly try adding the following devarg to your pci-whitelist?
--pci-whitelist 0000:00:0d.0,dv_flow_en=0 --pci-whitelist 0000:00:0e.0,dv_flow_en=0
Kindest regards,
Raslan Darawsheh
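For reference, a full basicfwd invocation with the devarg applied might look like the following. This is a sketch reusing the lcore, memory-channel, and log-level options from the original report; adjust the PCI addresses to match your own VM:

```shell
# Disable the DV flow engine on both mlx5 VFs via the dv_flow_en=0
# devarg, so the PMD falls back to the Verbs flow path instead of
# aborting the probe with "DV flow is not supported".
./basicfwd -l 1 -n 4 \
    --pci-whitelist 0000:00:0d.0,dv_flow_en=0 \
    --pci-whitelist 0000:00:0e.0,dv_flow_en=0 \
    --log-level 8
```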
> -----Original Message-----
> From: users <users-bounces@dpdk.org> On Behalf Of Das, Surajit
> Sent: Wednesday, January 27, 2021 7:08 PM
> To: users <users@dpdk.org>
> Cc: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
> Subject: [dpdk-users] Skeleton basicfwd sample app hits error [net_mlx5: DV
> flow is not supported] on Mellanox VF in VM
>
> Hi DPDK experts,
>
> I am hitting an error when running the sample basicfwd app on Mellanox VFs.
> The error is the same with the l2fwd, l2fwd-crypto, and ipsec-secgw apps.
>
> I have passed the Mellanox VFs through to a VM via PCI passthrough.
> Pasting the command, output, and the relevant version info below.
> If anyone else has faced this, please do share what could fix this.
>
>
> Command and Output:
> [root@dpdk-dev-1 build]# ./basicfwd -l 1 -n 4 --pci-whitelist 0000:00:0d.0 --pci-whitelist 0000:00:0e.0 --log-level 8
> EAL: Detected 8 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: No available hugepages reported in hugepages-1048576kB
> EAL: Probing VFIO support...
> EAL: PCI device 0000:00:0d.0 on NUMA socket -1
> EAL: Invalid NUMA socket, default to 0
> EAL: probe driver: 15b3:1014 net_mlx5
> net_mlx5: DV flow is not supported
> net_mlx5: probe of PCI device 0000:00:0d.0 aborted after encountering an error: Operation not supported
> EAL: Requested device 0000:00:0d.0 cannot be used
> EAL: PCI device 0000:00:0e.0 on NUMA socket -1
> EAL: Invalid NUMA socket, default to 0
> EAL: probe driver: 15b3:1014 net_mlx5
> net_mlx5: DV flow is not supported
> net_mlx5: probe of PCI device 0000:00:0e.0 aborted after encountering an error: Operation not supported
> EAL: Requested device 0000:00:0e.0 cannot be used
> EAL: Error - exiting with code: 1
> Cause: Error: number of ports must be even
> [root@dpdk-dev-1 build]#
>
> Versions:
> 3.10.0-1062.el7.x86_64
> CentOS Linux release 7.7.1908 (Core)
> dpdk-stable-19.11.6
>
> NIC detail:
> 00:0c.0 Ethernet controller: Intel Corporation Ethernet Virtual Function 700 Series (rev 09)
>     Subsystem: Super Micro Computer Inc Device 0000
>     Physical Slot: 12
>     Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
>     Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
>     Latency: 0
>     Region 0: Memory at fe220000 (64-bit, prefetchable) [size=128K]
>     Region 3: Memory at fe244000 (64-bit, prefetchable) [size=16K]
>     Capabilities: [70] MSI-X: Enable+ Count=5 Masked-
>         Vector table: BAR=3 offset=00000000
>         PBA: BAR=3 offset=00002000
>     Capabilities: [a0] Express (v2) Endpoint, MSI 00
>         DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <512ns, L1 <64us
>             ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+ SlotPowerLimit 0.000W
>         DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
>             RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
>             MaxPayload 128 bytes, MaxReadReq 128 bytes
>         DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
>         LnkCap: Port #0, Speed 2.5GT/s, Width x1, ASPM L0s L1, Exit Latency L0s <64ns, L1 <1us
>             ClockPM- Surprise- LLActRep- BwNot- ASPMOptComp+
>         LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk- ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
>         LnkSta: Speed unknown, Width x0, TrErr- Train- SlotClk- DLActive- BWMgmt- ABWMgmt-
>         DevCap2: Completion Timeout: Range AB, TimeoutDis+, LTR-, OBFF Not Supported
>         DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
>         LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-, EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
>     Kernel driver in use: iavf
>     Kernel modules: iavf
>
> Regards,
> Surajit
Thread overview: 3+ messages
2021-01-27 17:07 Das, Surajit
2021-01-28 8:25 ` Raslan Darawsheh [this message]
2021-01-28 11:06 ` Das, Surajit