From: Tal Shnaiderman <talshn@nvidia.com>
To: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>,
Ophir Munk <ophirmu@nvidia.com>,
Robert Hable <robert.hable@massresponse.com>
Cc: "users@dpdk.org" <users@dpdk.org>
Subject: RE: Windows examples failed to start using mellanox card
Date: Tue, 12 Jul 2022 10:23:54 +0000 [thread overview]
Message-ID: <MW4PR12MB566806DAC154E81C16CD04F9A4869@MW4PR12MB5668.namprd12.prod.outlook.com> (raw)
In-Reply-To: <MW4PR12MB5668DC7DDFBABCE64CC5D90CA4869@MW4PR12MB5668.namprd12.prod.outlook.com>
Small correction: please try with the value 0xFFFFFF.
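In case it helps, here is a rough sketch of how the key could be set from an elevated command prompt. The subkey index (0008 below) and the adapter-description filter are only examples and will differ per system; check which subkey under the network class key matches your ConnectX-4 Lx before writing anything.

```shell
:: Find the subkey of the Mellanox adapter under the network adapter class key
:: (look for the entry whose DriverDesc mentions ConnectX)
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}" /s /f "ConnectX" /d

:: Set DevxFsRules on the matching subkey (0008 is only an example index)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0008" /v DevxFsRules /t REG_DWORD /d 0xFFFFFF /f

:: Restart the adapter so the new value takes effect
powershell -Command "Get-NetAdapter | Where-Object InterfaceDescription -like '*ConnectX-4*' | Restart-NetAdapter"
```

The new value should be picked up once the adapter comes back up.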
> Subject: RE: Windows examples failed to start using mellanox card
>
> Hi Robert,
>
> Did you set the registry key (DevxFsRules) in the Windows registry?
>
> https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
>
> If not, can you try setting it to the value (0xFFFFFFFF) and see if the issue
> still occurs after an adapter restart?
>
> Thanks,
>
> Tal.
>
> > Subject: Re: Windows examples failed to start using mellanox card
> >
> > External email: Use caution opening links or attachments
> >
> >
> > Tal, Ophir, could you advise?
> >
> > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > Hello,
> > >
> > > I am having trouble running DPDK on Windows. I am trying to use the
> > > example programs and testpmd, but they fail with some errors (see
> > > outputs below). The testpmd program also does not go into interactive
> > > mode and exits after a keypress.
> > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card using
> > > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > I am using the current build (DPDK Version 22.07-rc3).
> > >
> > > I followed the DPDK Windows guide, but currently I always get the
> > > following error:
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > >
> > > Does anybody have an idea how to resolve this problem, or at least how
> > > to get some more information about why it failed?
> > > Helloworld output:
> > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > hello from core 1
> > > hello from core 2
> > > hello from core 3
> > > hello from core 4
> > > hello from core 5
> > > hello from core 6
> > > hello from core 7
> > > hello from core 8
> > > hello from core 16
> > > hello from core 22
> > > hello from core 11
> > > hello from core 12
> > > hello from core 13
> > > hello from core 14
> > > hello from core 15
> > > hello from core 9
> > > hello from core 17
> > > hello from core 18
> > > hello from core 19
> > > hello from core 20
> > > hello from core 21
> > > hello from core 23
> > > hello from core 0
> > > hello from core 10
> > >
> > > testpmd output:
> > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > Configuring Port 0 (socket 0)
> > > mlx5_net: port 0 failed to set defaults flows
> > > Fail to start port 0: Invalid argument
> > > Configuring Port 1 (socket 0)
> > > mlx5_net: port 1 failed to set defaults flows
> > > Fail to start port 1: Invalid argument
> > > Please stop the ports first
> > > Done
> > > No commandline core given, start packet forwarding
> > > Not all ports were started
> > > Press enter to exit
> > >
> > >
> > > Stopping port 0...
> > > Stopping ports...
> > > Done
> > >
> > > Stopping port 1...
> > > Stopping ports...
> > > Done
> > >
> > > Shutting down port 0...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 0 is closed
> > > Done
> > >
> > > Shutting down port 1...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 1 is closed
> > > Done
> > >
> > > Bye...
> > >
> > > Kind regards,
> > > Robert
Thread overview: 8+ messages
2022-07-11 14:10 Robert Hable
2022-07-12 8:27 ` Dmitry Kozlyuk
2022-07-12 8:43 ` Tal Shnaiderman
2022-07-12 10:23 ` Tal Shnaiderman [this message]
2022-07-12 14:03 ` AW: " Robert Hable
2022-07-12 14:35 ` Tal Shnaiderman
2022-07-13 7:57 ` AW: " Robert Hable
2022-07-26 14:14 ` Tal Shnaiderman