DPDK usage discussions
From: Tal Shnaiderman <talshn@nvidia.com>
To: Robert Hable <robert.hable@massresponse.com>,
	Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
	Ophir Munk <ophirmu@nvidia.com>
Cc: "users@dpdk.org" <users@dpdk.org>,
	Adham Masarwah <adham@nvidia.com>,
	Idan Hackmon <idanhac@nvidia.com>,
	Eilon Greenstein <eilong@nvidia.com>
Subject: RE: Windows examples failed to start using mellanox card
Date: Tue, 12 Jul 2022 14:35:34 +0000	[thread overview]
Message-ID: <MW4PR12MB56684E6190D6C3718D45FB02A4869@MW4PR12MB5668.namprd12.prod.outlook.com> (raw)
In-Reply-To: <PAXP190MB17415BDAB91BFB5D03E5A102E1869@PAXP190MB1741.EURP190.PROD.OUTLOOK.COM>

Hi Robert,

I'm glad to hear the issue is resolved, but I still want to understand why you get that error print, as I'm not able to reproduce it on my side.

Are you using a virtual machine or is it a back-to-back setup? Can you also state the WinOF-2 FW version you're using?

Thanks,

Tal.

> -----Original Message-----
> From: Robert Hable <robert.hable@massresponse.com>
> Sent: Tuesday, July 12, 2022 5:03 PM
> To: Tal Shnaiderman <talshn@nvidia.com>; Dmitry Kozlyuk
> <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> Cc: users@dpdk.org
> Subject: AW: Windows examples failed to start using mellanox card
> 
> External email: Use caution opening links or attachments
> 
> 
> Hello,
> 
> thank you for your help!
> Initially I had set DevxFsRules to 0. After setting it to 0xFFFFFF, it now
> works. I still get the following error, but testpmd seems to be working now,
> as it is sending and receiving packets according to Task Manager.
>         mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> status=0 syndrome=0
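> 
> Packet forwarding can also be confirmed from inside testpmd itself, rather
> than through Task Manager, using its port statistics command at the prompt:
> 
>     testpmd> show port stats all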
> 
> So currently my issue seems to be solved.
> 
> Kind regards,
> Robert
> 
> -----Original Message-----
> From: Tal Shnaiderman <talshn@nvidia.com>
> Sent: Tuesday, July 12, 2022 12:24
> To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk
> <ophirmu@nvidia.com>; Robert Hable <robert.hable@massresponse.com>
> Cc: users@dpdk.org
> Subject: RE: Windows examples failed to start using mellanox card
> 
> Small correction: please try with the value 0xFFFFFF.
> 
> > Subject: RE: Windows examples failed to start using mellanox card
> >
> > Hi Robert,
> >
> > Did you set the registry key (DevxFsRules) in the Windows registry?
> >
> > https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> >
> > If not, can you try setting it to the value (0xFFFFFFFF) and see if the
> > issue still occurs after an adapter restart?
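> >
> > For reference, a minimal sketch of setting the key from an elevated command
> > prompt, assuming the WinOF-2 per-adapter keys live under the network class
> > key described on the page above; the four-digit instance index (0008) and
> > the adapter name ("Ethernet 3") are placeholders to look up for your own port:
> >
> >   rem Write DevxFsRules for this adapter instance, then restart the adapter
> >   rem so the driver re-reads the key.
> >   reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0008" /v DevxFsRules /t REG_DWORD /d 0xFFFFFFFF /f
> >   powershell -Command "Restart-NetAdapter -Name 'Ethernet 3'"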
> >
> > Thanks,
> >
> > Tal.
> >
> > > Subject: Re: Windows examples failed to start using mellanox card
> > >
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > Tal, Ophir, could you advise?
> > >
> > > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > > Hello,
> > > >
> > > > I am having trouble running DPDK on Windows. I am trying to use the
> > > > example programs and testpmd, but they fail with some errors (see
> > > > outputs below). The testpmd program also does not go into interactive
> > > > mode and exits after a keypress.
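> > > >
> > > > As an aside, testpmd only enters its interactive prompt when the -i
> > > > application option is passed after the "--" separator following the EAL
> > > > options; a minimal example invocation, with the core list chosen
> > > > arbitrarily, would be:
> > > >
> > > >     C:\dpdk\build\app>dpdk-testpmd.exe -l 0-3 -- -i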
> > > > I am using Windows Server 2022 with a Mellanox ConnectX-4 Lx card,
> > > > using WinOF-2 2.90.50010 / SDK 2.90.25518.
> > > > I am using the current build (DPDK version 22.07-rc3).
> > > >
> > > > I followed the DPDK Windows guide, but currently I always get the
> > > > following error:
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > >
> > > > Does anybody have an idea how to resolve this problem, or at least how
> > > > to get some more information on why it failed?
> > > > Helloworld output:
> > > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > hello from core 1
> > > > hello from core 2
> > > > hello from core 3
> > > > hello from core 4
> > > > hello from core 5
> > > > hello from core 6
> > > > hello from core 7
> > > > hello from core 8
> > > > hello from core 16
> > > > hello from core 22
> > > > hello from core 11
> > > > hello from core 12
> > > > hello from core 13
> > > > hello from core 14
> > > > hello from core 15
> > > > hello from core 9
> > > > hello from core 17
> > > > hello from core 18
> > > > hello from core 19
> > > > hello from core 20
> > > > hello from core 21
> > > > hello from core 23
> > > > hello from core 0
> > > > hello from core 10
> > > >
> > > > testpmd output:
> > > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > Configuring Port 0 (socket 0)
> > > > mlx5_net: port 0 failed to set defaults flows
> > > > Fail to start port 0: Invalid argument
> > > > Configuring Port 1 (socket 0)
> > > > mlx5_net: port 1 failed to set defaults flows
> > > > Fail to start port 1: Invalid argument
> > > > Please stop the ports first
> > > > Done
> > > > No commandline core given, start packet forwarding
> > > > Not all ports were started
> > > > Press enter to exit
> > > >
> > > >
> > > > Stopping port 0...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Stopping port 1...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Shutting down port 0...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 0 is closed
> > > > Done
> > > >
> > > > Shutting down port 1...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 1 is closed
> > > > Done
> > > >
> > > > Bye...
> > > >
> > > > Kind regards,
> > > > Robert



Thread overview: 8+ messages
2022-07-11 14:10 Robert Hable
2022-07-12  8:27 ` Dmitry Kozlyuk
2022-07-12  8:43   ` Tal Shnaiderman
2022-07-12 10:23     ` Tal Shnaiderman
2022-07-12 14:03       ` AW: " Robert Hable
2022-07-12 14:35         ` Tal Shnaiderman [this message]
2022-07-13  7:57           ` Robert Hable
2022-07-26 14:14             ` Tal Shnaiderman
