DPDK usage discussions
* Windows examples failed to start using mellanox card
@ 2022-07-11 14:10 Robert Hable
  2022-07-12  8:27 ` Dmitry Kozlyuk
  0 siblings, 1 reply; 8+ messages in thread
From: Robert Hable @ 2022-07-11 14:10 UTC (permalink / raw)
  To: users

Hello,

I am having trouble running DPDK on Windows. I am trying to use the example programs and testpmd, but they fail with some errors (see outputs below). The testpmd program also does not go into interactive mode and exits after a keypress.
I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card using Win-OF 2 2.90.50010 / SDK 2.90.25518.
I am using the current build (DPDK Version 22.07-rc3).
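
For reference: testpmd enters its interactive prompt only when the -i flag is given after the "--" separator for application arguments; started bare, as in the output below, it begins forwarding and exits on a keypress. A minimal sketch of an interactive invocation, assuming default EAL options:

    C:\dpdk\build\app>dpdk-testpmd.exe -- -i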

I followed the DPDK Windows guide, but currently I always get the following error:
mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0

Does anybody have an idea how to resolve this problem, or at least how to get more information about why it fails?
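
For anyone wanting more detail from the driver: EAL's --log-level option can raise the mlx5 verbosity at startup. A sketch only, assuming the common driver's logtype is named pmd.common.mlx5 (verify the registered logtype names on your build):

    C:\dpdk\build\app>dpdk-testpmd.exe --log-level=pmd.common.mlx5:debug -- -i
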
Helloworld output:
C:\dpdk\build\examples>dpdk-helloworld.exe
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 2
EAL: Multi-process support is requested, but not available.
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
mlx5_net: Rx CQE 128B compression is not supported.
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
mlx5_net: Rx CQE 128B compression is not supported.
hello from core 1
hello from core 2
hello from core 3
hello from core 4
hello from core 5
hello from core 6
hello from core 7
hello from core 8
hello from core 16
hello from core 22
hello from core 11
hello from core 12
hello from core 13
hello from core 14
hello from core 15
hello from core 9
hello from core 17
hello from core 18
hello from core 19
hello from core 20
hello from core 21
hello from core 23
hello from core 0
hello from core 10

testpmd output:
C:\dpdk\build\app>dpdk-testpmd.exe
EAL: Detected CPU lcores: 24
EAL: Detected NUMA nodes: 2
EAL: Multi-process support is requested, but not available.
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
mlx5_net: Rx CQE 128B compression is not supported.
EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
mlx5_net: Rx CQE 128B compression is not supported.
testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
mlx5_net: port 0 failed to set defaults flows
Fail to start port 0: Invalid argument
Configuring Port 1 (socket 0)
mlx5_net: port 1 failed to set defaults flows
Fail to start port 1: Invalid argument
Please stop the ports first
Done
No commandline core given, start packet forwarding
Not all ports were started
Press enter to exit


Stopping port 0...
Stopping ports...
Done

Stopping port 1...
Stopping ports...
Done

Shutting down port 0...
Closing ports...
mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
Port 0 is closed
Done

Shutting down port 1...
Closing ports...
mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
Port 1 is closed
Done

Bye...

Kind regards,
Robert


* Re: Windows examples failed to start using mellanox card
  2022-07-11 14:10 Windows examples failed to start using mellanox card Robert Hable
@ 2022-07-12  8:27 ` Dmitry Kozlyuk
  2022-07-12  8:43   ` Tal Shnaiderman
  0 siblings, 1 reply; 8+ messages in thread
From: Dmitry Kozlyuk @ 2022-07-12  8:27 UTC (permalink / raw)
  To: Tal Shnaiderman, Ophir Munk; +Cc: Robert Hable, users

Tal, Ophir, could you advise?

2022-07-11 14:10 (UTC+0000), Robert Hable:
> Hello,
> 
> I am having trouble running DPDK on Windows. I am trying to use the example programs and testpmd, but they fail with some errors (see outputs below). The testpmd program also does not go into interactive mode and exits after a keypress.
> I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card using Win-OF 2 2.90.50010 / SDK 2.90.25518.
> I am using the current build (DPDK Version 22.07-rc3).
> 
> I followed to DPDK Windows guide, but currently I always get the following error:
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> 
> does anybody have an idea how to resolve this problem, or at least get some more information why it failed?
> Helloworld output:
> C:\dpdk\build\examples>dpdk-helloworld.exe  
> EAL: Detected CPU lcores: 24
> EAL: Detected NUMA nodes: 2
> EAL: Multi-process support is requested, but not available.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> hello from core 1
> hello from core 2
> hello from core 3
> hello from core 4
> hello from core 5
> hello from core 6
> hello from core 7
> hello from core 8
> hello from core 16
> hello from core 22
> hello from core 11
> hello from core 12
> hello from core 13
> hello from core 14
> hello from core 15
> hello from core 9
> hello from core 17
> hello from core 18
> hello from core 19
> hello from core 20
> hello from core 21
> hello from core 23
> hello from core 0
> hello from core 10
> 
> testpmd output:
> C:\dpdk\build\app>dpdk-testpmd.exe  
> EAL: Detected CPU lcores: 24
> EAL: Detected NUMA nodes: 2
> EAL: Multi-process support is requested, but not available.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> mlx5_net: port 0 failed to set defaults flows
> Fail to start port 0: Invalid argument
> Configuring Port 1 (socket 0)
> mlx5_net: port 1 failed to set defaults flows
> Fail to start port 1: Invalid argument
> Please stop the ports first
> Done
> No commandline core given, start packet forwarding
> Not all ports were started
> Press enter to exit
> 
> 
> Stopping port 0...
> Stopping ports...
> Done
> 
> Stopping port 1...
> Stopping ports...
> Done
> 
> Shutting down port 0...
> Closing ports...
> mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> Port 0 is closed
> Done
> 
> Shutting down port 1...
> Closing ports...
> mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> Port 1 is closed
> Done
> 
> Bye...
> 
> Kind regards,
> Robert



* RE: Windows examples failed to start using mellanox card
  2022-07-12  8:27 ` Dmitry Kozlyuk
@ 2022-07-12  8:43   ` Tal Shnaiderman
  2022-07-12 10:23     ` Tal Shnaiderman
  0 siblings, 1 reply; 8+ messages in thread
From: Tal Shnaiderman @ 2022-07-12  8:43 UTC (permalink / raw)
  To: Dmitry Kozlyuk, Ophir Munk, Robert Hable; +Cc: users

Hi Robert,

Did you set the registry key (DevxFsRules) in the Windows registry?

https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys

If not, can you try setting it to the value 0xFFFFFFFF and see if the issue still occurs after an adapter restart?
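
For example, from an elevated command prompt (a sketch only: the 0001 instance subkey under the network class key is an assumption and differs per adapter, so first check which subkey lists the ConnectX-4):

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0001" /v DevxFsRules /t REG_DWORD /d 0xFFFFFFFF /f

followed by disabling/enabling the adapter (in Device Manager, or via Restart-NetAdapter in PowerShell).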

Thanks,

Tal.

> Subject: Re: Windows examples failed to start using mellanox card
> 
> External email: Use caution opening links or attachments
> 
> 
> Tal, Ophir, could you advise?
> 
> 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > Hello,
> >
> > I am having trouble running DPDK on Windows. I am trying to use the
> example programs and testpmd, but they fail with some errors (see outputs
> below). The testpmd program also does not go into interactive mode and exits
> after a keypress.
> > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card using
> Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > I am using the current build (DPDK Version 22.07-rc3).
> >
> > I followed to DPDK Windows guide, but currently I always get the following
> error:
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > status=0 syndrome=0
> >
> > does anybody have an idea how to resolve this problem, or at least get
> some more information why it failed?
> > Helloworld output:
> > C:\dpdk\build\examples>dpdk-helloworld.exe
> > EAL: Detected CPU lcores: 24
> > EAL: Detected NUMA nodes: 2
> > EAL: Multi-process support is requested, but not available.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > hello from core 1
> > hello from core 2
> > hello from core 3
> > hello from core 4
> > hello from core 5
> > hello from core 6
> > hello from core 7
> > hello from core 8
> > hello from core 16
> > hello from core 22
> > hello from core 11
> > hello from core 12
> > hello from core 13
> > hello from core 14
> > hello from core 15
> > hello from core 9
> > hello from core 17
> > hello from core 18
> > hello from core 19
> > hello from core 20
> > hello from core 21
> > hello from core 23
> > hello from core 0
> > hello from core 10
> >
> > testpmd output:
> > C:\dpdk\build\app>dpdk-testpmd.exe
> > EAL: Detected CPU lcores: 24
> > EAL: Detected NUMA nodes: 2
> > EAL: Multi-process support is requested, but not available.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > (socket 0)
> > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > status=0 syndrome=0
> > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > mlx5_net: Rx CQE 128B compression is not supported.
> > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > testpmd: preferred mempool ops selected: ring_mp_mc
> > Configuring Port 0 (socket 0)
> > mlx5_net: port 0 failed to set defaults flows
> > Fail to start port 0: Invalid argument
> > Configuring Port 1 (socket 0)
> > mlx5_net: port 1 failed to set defaults flows
> > Fail to start port 1: Invalid argument
> > Please stop the ports first
> > Done
> > No commandline core given, start packet forwarding
> > Not all ports were started
> > Press enter to exit
> >
> >
> > Stopping port 0...
> > Stopping ports...
> > Done
> >
> > Stopping port 1...
> > Stopping ports...
> > Done
> >
> > Shutting down port 0...
> > Closing ports...
> > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > Port 0 is closed
> > Done
> >
> > Shutting down port 1...
> > Closing ports...
> > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > Port 1 is closed
> > Done
> >
> > Bye...
> >
> > Kind regards,
> > Robert



* RE: Windows examples failed to start using mellanox card
  2022-07-12  8:43   ` Tal Shnaiderman
@ 2022-07-12 10:23     ` Tal Shnaiderman
  2022-07-12 14:03       ` AW: " Robert Hable
  0 siblings, 1 reply; 8+ messages in thread
From: Tal Shnaiderman @ 2022-07-12 10:23 UTC (permalink / raw)
  To: Dmitry Kozlyuk, Ophir Munk, Robert Hable; +Cc: users

Small correction, please try with the value 0xFFFFFF.
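
That is, with the same sketch (and the same assumed 0001 instance subkey) as before:

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0001" /v DevxFsRules /t REG_DWORD /d 0xFFFFFF /f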

> Subject: RE: Windows examples failed to start using mellanox card
> 
> Hi Robert,
> 
> Did you set the registry key (DevxFsRules) in the Windows registry?
> 
> https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> 
> If not, can you try setting it to the value (0xFFFFFFFF) and see it the issue still
> occurs after adapter restart?
> 
> Thanks,
> 
> Tal.
> 
> > Subject: Re: Windows examples failed to start using mellanox card
> >
> > External email: Use caution opening links or attachments
> >
> >
> > Tal, Ophir, could you advise?
> >
> > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > Hello,
> > >
> > > I am having trouble running DPDK on Windows. I am trying to use the
> > example programs and testpmd, but they fail with some errors (see
> > outputs below). The testpmd program also does not go into interactive
> > mode and exits after a keypress.
> > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card
> > > using
> > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > I am using the current build (DPDK Version 22.07-rc3).
> > >
> > > I followed to DPDK Windows guide, but currently I always get the
> > > following
> > error:
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > >
> > > does anybody have an idea how to resolve this problem, or at least
> > > get
> > some more information why it failed?
> > > Helloworld output:
> > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > hello from core 1
> > > hello from core 2
> > > hello from core 3
> > > hello from core 4
> > > hello from core 5
> > > hello from core 6
> > > hello from core 7
> > > hello from core 8
> > > hello from core 16
> > > hello from core 22
> > > hello from core 11
> > > hello from core 12
> > > hello from core 13
> > > hello from core 14
> > > hello from core 15
> > > hello from core 9
> > > hello from core 17
> > > hello from core 18
> > > hello from core 19
> > > hello from core 20
> > > hello from core 21
> > > hello from core 23
> > > hello from core 0
> > > hello from core 10
> > >
> > > testpmd output:
> > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > Configuring Port 0 (socket 0)
> > > mlx5_net: port 0 failed to set defaults flows
> > > Fail to start port 0: Invalid argument
> > > Configuring Port 1 (socket 0)
> > > mlx5_net: port 1 failed to set defaults flows
> > > Fail to start port 1: Invalid argument
> > > Please stop the ports first
> > > Done
> > > No commandline core given, start packet forwarding
> > > Not all ports were started
> > > Press enter to exit
> > >
> > >
> > > Stopping port 0...
> > > Stopping ports...
> > > Done
> > >
> > > Stopping port 1...
> > > Stopping ports...
> > > Done
> > >
> > > Shutting down port 0...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 0 is closed
> > > Done
> > >
> > > Shutting down port 1...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 1 is closed
> > > Done
> > >
> > > Bye...
> > >
> > > Kind regards,
> > > Robert



* AW: Windows examples failed to start using mellanox card
  2022-07-12 10:23     ` Tal Shnaiderman
@ 2022-07-12 14:03       ` Robert Hable
  2022-07-12 14:35         ` Tal Shnaiderman
  0 siblings, 1 reply; 8+ messages in thread
From: Robert Hable @ 2022-07-12 14:03 UTC (permalink / raw)
  To: Tal Shnaiderman, Dmitry Kozlyuk, Ophir Munk; +Cc: users

Hello,

Thank you for your help!
Initially I had set DevxFsRules to 0. After setting it to 0xFFFFFF, it now works.
I still get the following error, but testpmd seems to work for me now: it is sending and receiving packets according to Task Manager.
	mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0

So currently my issue seems to be solved.
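
For completeness, the resulting value can be checked from a command prompt (same caveat as earlier in the thread: the 0001 instance subkey is only an example and differs per adapter):

    reg query "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4d36e972-e325-11ce-bfc1-08002be10318}\0001" /v DevxFsRules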

Kind regards,
Robert

-----Ursprüngliche Nachricht-----
Von: Tal Shnaiderman <talshn@nvidia.com> 
Gesendet: Dienstag, 12. Juli 2022 12:24
An: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>; Robert Hable <robert.hable@massresponse.com>
Cc: users@dpdk.org
Betreff: RE: Windows examples failed to start using mellanox card

Small correction, please try with the value 0xFFFFFF.

> Subject: RE: Windows examples failed to start using mellanox card
> 
> Hi Robert,
> 
> Did you set the registry key (DevxFsRules) in the Windows registry?
> 
> https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> 
> If not, can you try setting it to the value (0xFFFFFFFF) and see it 
> the issue still occurs after adapter restart?
> 
> Thanks,
> 
> Tal.
> 
> > Subject: Re: Windows examples failed to start using mellanox card
> >
> > External email: Use caution opening links or attachments
> >
> >
> > Tal, Ophir, could you advise?
> >
> > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > Hello,
> > >
> > > I am having trouble running DPDK on Windows. I am trying to use 
> > > the
> > example programs and testpmd, but they fail with some errors (see 
> > outputs below). The testpmd program also does not go into 
> > interactive mode and exits after a keypress.
> > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card 
> > > using
> > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > I am using the current build (DPDK Version 22.07-rc3).
> > >
> > > I followed to DPDK Windows guide, but currently I always get the 
> > > following
> > error:
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > >
> > > does anybody have an idea how to resolve this problem, or at least 
> > > get
> > some more information why it failed?
> > > Helloworld output:
> > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > hello from core 1
> > > hello from core 2
> > > hello from core 3
> > > hello from core 4
> > > hello from core 5
> > > hello from core 6
> > > hello from core 7
> > > hello from core 8
> > > hello from core 16
> > > hello from core 22
> > > hello from core 11
> > > hello from core 12
> > > hello from core 13
> > > hello from core 14
> > > hello from core 15
> > > hello from core 9
> > > hello from core 17
> > > hello from core 18
> > > hello from core 19
> > > hello from core 20
> > > hello from core 21
> > > hello from core 23
> > > hello from core 0
> > > hello from core 10
> > >
> > > testpmd output:
> > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > EAL: Detected CPU lcores: 24
> > > EAL: Detected NUMA nodes: 2
> > > EAL: Multi-process support is requested, but not available.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 
> > > (socket 0)
> > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > status=0 syndrome=0
> > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > mlx5_net: Rx CQE 128B compression is not supported.
> > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > Configuring Port 0 (socket 0)
> > > mlx5_net: port 0 failed to set defaults flows
> > > Fail to start port 0: Invalid argument
> > > Configuring Port 1 (socket 0)
> > > mlx5_net: port 1 failed to set defaults flows
> > > Fail to start port 1: Invalid argument
> > > Please stop the ports first
> > > Done
> > > No commandline core given, start packet forwarding
> > > Not all ports were started
> > > Press enter to exit
> > >
> > >
> > > Stopping port 0...
> > > Stopping ports...
> > > Done
> > >
> > > Stopping port 1...
> > > Stopping ports...
> > > Done
> > >
> > > Shutting down port 0...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 0 is closed
> > > Done
> > >
> > > Shutting down port 1...
> > > Closing ports...
> > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > Port 1 is closed
> > > Done
> > >
> > > Bye...
> > >
> > > Kind regards,
> > > Robert



* RE: Windows examples failed to start using mellanox card
  2022-07-12 14:03       ` AW: " Robert Hable
@ 2022-07-12 14:35         ` Tal Shnaiderman
  2022-07-13  7:57           ` AW: " Robert Hable
  0 siblings, 1 reply; 8+ messages in thread
From: Tal Shnaiderman @ 2022-07-12 14:35 UTC (permalink / raw)
  To: Robert Hable, Dmitry Kozlyuk, Ophir Munk
  Cc: users, Adham Masarwah, Idan Hackmon, Eilon Greenstein

Hi Robert,

I'm glad to hear the issue is resolved, but I still want to understand why you get the error print, as I'm not able to reproduce it on my side.

Are you using a virtual machine, or is it a back-to-back setup? Can you also state the WinOF-2 FW version you're using?
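
If you have the Mellanox firmware tools (MFT) installed, the firmware version can also be read from the command line (a sketch, assuming mlxfwmanager is on PATH):

    mlxfwmanager --query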

Thanks,

Tal.

> -----Original Message-----
> From: Robert Hable <robert.hable@massresponse.com>
> Sent: Tuesday, July 12, 2022 5:03 PM
> To: Tal Shnaiderman <talshn@nvidia.com>; Dmitry Kozlyuk
> <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> Cc: users@dpdk.org
> Subject: AW: Windows examples failed to start using mellanox card
> 
> External email: Use caution opening links or attachments
> 
> 
> Hello,
> 
> thank you for your help!
> Initially i did set DevxFsRules to 0. After setting it to 0xFFFFFF it now works.
> I still get the following error but testpmd seems to work for me now as it is
> sending and receiving packets according to the task-manager.
>         mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> status=0 syndrome=0
> 
> So currently my issue seems to be solved.
> 
> Kind regards,
> Robert
> 
> -----Ursprüngliche Nachricht-----
> Von: Tal Shnaiderman <talshn@nvidia.com>
> Gesendet: Dienstag, 12. Juli 2022 12:24
> An: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk
> <ophirmu@nvidia.com>; Robert Hable <robert.hable@massresponse.com>
> Cc: users@dpdk.org
> Betreff: RE: Windows examples failed to start using mellanox card
> 
> Small correction, please try with the value 0xFFFFFF.
> 
> > Subject: RE: Windows examples failed to start using mellanox card
> >
> > Hi Robert,
> >
> > Did you set the registry key (DevxFsRules) in the Windows registry?
> >
> > https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> >
> > If not, can you try setting it to the value (0xFFFFFFFF) and see it
> > the issue still occurs after adapter restart?
> >
> > Thanks,
> >
> > Tal.
> >
> > > Subject: Re: Windows examples failed to start using mellanox card
> > >
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > Tal, Ophir, could you advise?
> > >
> > > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > > Hello,
> > > >
> > > > I am having trouble running DPDK on Windows. I am trying to use
> > > > the
> > > example programs and testpmd, but they fail with some errors (see
> > > outputs below). The testpmd program also does not go into
> > > interactive mode and exits after a keypress.
> > > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX Card
> > > > using
> > > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > > I am using the current build (DPDK Version 22.07-rc3).
> > > >
> > > > I followed to DPDK Windows guide, but currently I always get the
> > > > following
> > > error:
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > >
> > > > does anybody have an idea how to resolve this problem, or at least
> > > > get
> > > some more information why it failed?
> > > > Helloworld output:
> > > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > hello from core 1
> > > > hello from core 2
> > > > hello from core 3
> > > > hello from core 4
> > > > hello from core 5
> > > > hello from core 6
> > > > hello from core 7
> > > > hello from core 8
> > > > hello from core 16
> > > > hello from core 22
> > > > hello from core 11
> > > > hello from core 12
> > > > hello from core 13
> > > > hello from core 14
> > > > hello from core 15
> > > > hello from core 9
> > > > hello from core 17
> > > > hello from core 18
> > > > hello from core 19
> > > > hello from core 20
> > > > hello from core 21
> > > > hello from core 23
> > > > hello from core 0
> > > > hello from core 10
> > > >
> > > > testpmd output:
> > > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > Configuring Port 0 (socket 0)
> > > > mlx5_net: port 0 failed to set defaults flows
> > > > Fail to start port 0: Invalid argument
> > > > Configuring Port 1 (socket 0)
> > > > mlx5_net: port 1 failed to set defaults flows
> > > > Fail to start port 1: Invalid argument
> > > > Please stop the ports first
> > > > Done
> > > > No commandline core given, start packet forwarding
> > > > Not all ports were started
> > > > Press enter to exit
> > > >
> > > >
> > > > Stopping port 0...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Stopping port 1...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Shutting down port 0...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 0 is closed
> > > > Done
> > > >
> > > > Shutting down port 1...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 1 is closed
> > > > Done
> > > >
> > > > Bye...
> > > >
> > > > Kind regards,
> > > > Robert



* AW: Windows examples failed to start using mellanox card
  2022-07-12 14:35         ` Tal Shnaiderman
@ 2022-07-13  7:57           ` Robert Hable
  2022-07-26 14:14             ` Tal Shnaiderman
  0 siblings, 1 reply; 8+ messages in thread
From: Robert Hable @ 2022-07-13  7:57 UTC (permalink / raw)
  To: Tal Shnaiderman, Dmitry Kozlyuk, Ophir Munk
  Cc: users, Adham Masarwah, Idan Hackmon, Eilon Greenstein

Hello,

I am using DPDK on a regular machine without any VMs. Here is the device information from Device Manager:
Driver Version : 2.90.25506.0
Firmware Version : 14.32.1010
Port Number : 2
Bus Type : PCI-E 5.0 GT/s x8
Link Speed : 1.0 Gbps/Full Duplex
Part Number : MCX4121A-XCAT
Serial Number : MT1802K30345
Device Id : 4117
Revision Id : 0
Current MAC Address : EC-0D-9A-D9-AA-43
Permanent MAC Address : EC-0D-9A-D9-AA-43
Network Status : Connected
Adapter Friendly Name : Ethernet 6
Port Type : ETH
IPv4 Address #1 : 172.21.2.114
IPv6 Address #1 : fe80::e14a:1ee5:54f6:b76d%2

Let me know if you need additional information.

Kind regards,
Robert

-----Ursprüngliche Nachricht-----
Von: Tal Shnaiderman <talshn@nvidia.com> 
Gesendet: Dienstag, 12. Juli 2022 16:36
An: Robert Hable <robert.hable@massresponse.com>; Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
Cc: users@dpdk.org; Adham Masarwah <adham@nvidia.com>; Idan Hackmon <idanhac@nvidia.com>; Eilon Greenstein <eilong@nvidia.com>
Betreff: RE: Windows examples failed to start using mellanox card

Hi Robert,

I'm glad to hear the issue is resolved, but I still want to understand why you get the error print as I'm not able to reproduce it on my side.

Are you using a Virtual Machine or is it a back-to-back setup? Can you also state the WINOF2 FW version you're using?

Thanks,

Tal.

> -----Original Message-----
> From: Robert Hable <robert.hable@massresponse.com>
> Sent: Tuesday, July 12, 2022 5:03 PM
> To: Tal Shnaiderman <talshn@nvidia.com>; Dmitry Kozlyuk 
> <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> Cc: users@dpdk.org
> Subject: AW: Windows examples failed to start using mellanox card
> 
> External email: Use caution opening links or attachments
> 
> 
> Hello,
> 
> thank you for your help!
> Initially i did set DevxFsRules to 0. After setting it to 0xFFFFFF it now works.
> I still get the following error but testpmd seems to work for me now 
> as it is sending and receiving packets according to the task-manager.
>         mlx5_common: DevX read access NIC register=0X9055 failed 
> errno=0
> status=0 syndrome=0
> 
> So currently my issue seems to be solved.
> 
> Kind regards,
> Robert
> 
> -----Ursprüngliche Nachricht-----
> Von: Tal Shnaiderman <talshn@nvidia.com>
> Gesendet: Dienstag, 12. Juli 2022 12:24
> An: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk 
> <ophirmu@nvidia.com>; Robert Hable <robert.hable@massresponse.com>
> Cc: users@dpdk.org
> Betreff: RE: Windows examples failed to start using mellanox card
> 
> Small correction, please try with the value 0xFFFFFF.
> 
> > Subject: RE: Windows examples failed to start using mellanox card
> >
> > Hi Robert,
> >
> > Did you set the registry key (DevxFsRules) in the Windows registry?
> >
> > https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> >
> > If not, can you try setting it to the value (0xFFFFFFFF) and see it 
> > the issue still occurs after adapter restart?
> >
> > Thanks,
> >
> > Tal.
> >
> > > Subject: Re: Windows examples failed to start using mellanox card
> > >
> > > External email: Use caution opening links or attachments
> > >
> > >
> > > Tal, Ophir, could you advise?
> > >
> > > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > > Hello,
> > > >
> > > > I am having trouble running DPDK on Windows. I am trying to use 
> > > > the
> > > example programs and testpmd, but they fail with some errors (see 
> > > outputs below). The testpmd program also does not go into 
> > > interactive mode and exits after a keypress.
> > > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX 
> > > > Card using
> > > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > > I am using the current build (DPDK Version 22.07-rc3).
> > > >
> > > > I followed to DPDK Windows guide, but currently I always get the 
> > > > following
> > > error:
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > >
> > > > does anybody have an idea how to resolve this problem, or at 
> > > > least get
> > > some more information why it failed?
> > > > Helloworld output:
> > > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > hello from core 1
> > > > hello from core 2
> > > > hello from core 3
> > > > hello from core 4
> > > > hello from core 5
> > > > hello from core 6
> > > > hello from core 7
> > > > hello from core 8
> > > > hello from core 16
> > > > hello from core 22
> > > > hello from core 11
> > > > hello from core 12
> > > > hello from core 13
> > > > hello from core 14
> > > > hello from core 15
> > > > hello from core 9
> > > > hello from core 17
> > > > hello from core 18
> > > > hello from core 19
> > > > hello from core 20
> > > > hello from core 21
> > > > hello from core 23
> > > > hello from core 0
> > > > hello from core 10
> > > >
> > > > testpmd output:
> > > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > > EAL: Detected CPU lcores: 24
> > > > EAL: Detected NUMA nodes: 2
> > > > EAL: Multi-process support is requested, but not available.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 
> > > > (socket 0)
> > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > status=0 syndrome=0
> > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > Configuring Port 0 (socket 0)
> > > > mlx5_net: port 0 failed to set defaults flows
> > > > Fail to start port 0: Invalid argument
> > > > Configuring Port 1 (socket 0)
> > > > mlx5_net: port 1 failed to set defaults flows
> > > > Fail to start port 1: Invalid argument
> > > > Please stop the ports first
> > > > Done
> > > > No commandline core given, start packet forwarding
> > > > Not all ports were started
> > > > Press enter to exit
> > > >
> > > >
> > > > Stopping port 0...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Stopping port 1...
> > > > Stopping ports...
> > > > Done
> > > >
> > > > Shutting down port 0...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 0 is closed
> > > > Done
> > > >
> > > > Shutting down port 1...
> > > > Closing ports...
> > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > Port 1 is closed
> > > > Done
> > > >
> > > > Bye...
> > > >
> > > > Kind regards,
> > > > Robert



* RE: Windows examples failed to start using mellanox card
  2022-07-13  7:57           ` AW: " Robert Hable
@ 2022-07-26 14:14             ` Tal Shnaiderman
  0 siblings, 0 replies; 8+ messages in thread
From: Tal Shnaiderman @ 2022-07-26 14:14 UTC (permalink / raw)
  To: Robert Hable, Dmitry Kozlyuk, Ophir Munk
  Cc: users, Adham Masarwah, Idan Hackmon, Eilon Greenstein

Hi Robert,

I investigated the reason you're getting the error message: it is the result of a PMD query that is not supported by devices older than ConnectX-6; however, it has no functional effect, since the PMD behaves differently on those devices.

I'll work on a patch to lower the severity of this message, as I understand it is confusing. For now, you can continue working with the PMD and ignore this message.
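
In the meantime, the print can be silenced at startup via EAL's --log-level option. A sketch, assuming the common driver's logtype is named pmd.common.mlx5; note this hides all error-level prints from that driver, so use it judiciously:

    dpdk-testpmd.exe --log-level=pmd.common.mlx5:critical -- -i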

Thanks for your help,

Tal.

> -----Original Message-----
> From: Robert Hable <Robert.Hable@spusu.at>
> Sent: Wednesday, July 13, 2022 10:58 AM
> To: Tal Shnaiderman <talshn@nvidia.com>; Dmitry Kozlyuk
> <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> Cc: users@dpdk.org; Adham Masarwah <adham@nvidia.com>; Idan Hackmon
> <idanhac@nvidia.com>; Eilon Greenstein <eilong@nvidia.com>
> Subject: AW: Windows examples failed to start using mellanox card
> 
> External email: Use caution opening links or attachments
> 
> 
> Hello,
> 
> I am using dpdk on a regular machine without any VMs. Here are the device
> information from the device manager:
> Driver Version : 2.90.25506.0
> Firmware Version : 14.32.1010
> Port Number : 2
> Bus Type : PCI-E 5.0 GT/s x8
> Link Speed : 1.0 Gbps/Full Duplex
> Part Number : MCX4121A-XCAT
> Serial Number : MT1802K30345
> Device Id : 4117
> Revision Id : 0
> Current MAC Address : EC-0D-9A-D9-AA-43
> Permanent MAC Address : EC-0D-9A-D9-AA-43
> Network Status : Connected
> Adapter Friendly Name : Ethernet 6
> Port Type : ETH
> IPv4 Address #1 : 172.21.2.114
> IPv6 Address #1 : fe80::e14a:1ee5:54f6:b76d%2
> 
> Let me know if you need additional information.
> 
> Kind regards,
> Robert
> 
> -----Ursprüngliche Nachricht-----
> Von: Tal Shnaiderman <talshn@nvidia.com>
> Gesendet: Dienstag, 12. Juli 2022 16:36
> An: Robert Hable <robert.hable@massresponse.com>; Dmitry Kozlyuk
> <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> Cc: users@dpdk.org; Adham Masarwah <adham@nvidia.com>; Idan Hackmon
> <idanhac@nvidia.com>; Eilon Greenstein <eilong@nvidia.com>
> Betreff: RE: Windows examples failed to start using mellanox card
> 
> Hi Robert,
> 
> I'm glad to hear the issue is resolved, but I still want to understand why you get
> the error print as I'm not able to reproduce it on my side.
> 
> Are you using a Virtual Machine or is it a back-to-back setup? Can you also state
> the WINOF2 FW version you're using?
> 
> Thanks,
> 
> Tal.
> 
> > -----Original Message-----
> > From: Robert Hable <robert.hable@massresponse.com>
> > Sent: Tuesday, July 12, 2022 5:03 PM
> > To: Tal Shnaiderman <talshn@nvidia.com>; Dmitry Kozlyuk
> > <dmitry.kozliuk@gmail.com>; Ophir Munk <ophirmu@nvidia.com>
> > Cc: users@dpdk.org
> > Subject: AW: Windows examples failed to start using mellanox card
> >
> > External email: Use caution opening links or attachments
> >
> >
> > Hello,
> >
> > thank you for your help!
> > Initially i did set DevxFsRules to 0. After setting it to 0xFFFFFF it now works.
> > I still get the following error but testpmd seems to work for me now
> > as it is sending and receiving packets according to the task-manager.
> >         mlx5_common: DevX read access NIC register=0X9055 failed
> > errno=0
> > status=0 syndrome=0
> >
> > So currently my issue seems to be solved.
> >
> > Kind regards,
> > Robert
> >
> > -----Ursprüngliche Nachricht-----
> > Von: Tal Shnaiderman <talshn@nvidia.com>
> > Gesendet: Dienstag, 12. Juli 2022 12:24
> > An: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>; Ophir Munk
> > <ophirmu@nvidia.com>; Robert Hable <robert.hable@massresponse.com>
> > Cc: users@dpdk.org
> > Betreff: RE: Windows examples failed to start using mellanox card
> >
> > Small correction, please try with the value 0xFFFFFF.
> >
> > > Subject: RE: Windows examples failed to start using mellanox card
> > >
> > > Hi Robert,
> > >
> > > Did you set the registry key (DevxFsRules) in the Windows registry?
> > >
> > > https://docs.nvidia.com/networking/display/winof2v290/Configuring+the+Driver+Registry+Keys
> > >
> > > If not, can you try setting it to the value (0xFFFFFFFF) and see it
> > > the issue still occurs after adapter restart?
> > >
> > > Thanks,
> > >
> > > Tal.
> > >
> > > > Subject: Re: Windows examples failed to start using mellanox card
> > > >
> > > > External email: Use caution opening links or attachments
> > > >
> > > >
> > > > Tal, Ophir, could you advise?
> > > >
> > > > 2022-07-11 14:10 (UTC+0000), Robert Hable:
> > > > > Hello,
> > > > >
> > > > > I am having trouble running DPDK on Windows. I am trying to use
> > > > > the
> > > > example programs and testpmd, but they fail with some errors (see
> > > > outputs below). The testpmd program also does not go into
> > > > interactive mode and exits after a keypress.
> > > > > I am using Windows Server 2022 with a Mellanox ConnectX-4 LX
> > > > > Card using
> > > > Win-OF 2 2.90.50010 / SDK 2.90.25518.
> > > > > I am using the current build (DPDK Version 22.07-rc3).
> > > > >
> > > > > I followed to DPDK Windows guide, but currently I always get the
> > > > > following
> > > > error:
> > > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > > status=0 syndrome=0
> > > > >
> > > > > does anybody have an idea how to resolve this problem, or at
> > > > > least get
> > > > some more information why it failed?
> > > > > Helloworld output:
> > > > > C:\dpdk\build\examples>dpdk-helloworld.exe
> > > > > EAL: Detected CPU lcores: 24
> > > > > EAL: Detected NUMA nodes: 2
> > > > > EAL: Multi-process support is requested, but not available.
> > > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > > (socket 0)
> > > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > > status=0 syndrome=0
> > > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > > (socket 0)
> > > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > > status=0 syndrome=0
> > > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > > hello from core 1
> > > > > hello from core 2
> > > > > hello from core 3
> > > > > hello from core 4
> > > > > hello from core 5
> > > > > hello from core 6
> > > > > hello from core 7
> > > > > hello from core 8
> > > > > hello from core 16
> > > > > hello from core 22
> > > > > hello from core 11
> > > > > hello from core 12
> > > > > hello from core 13
> > > > > hello from core 14
> > > > > hello from core 15
> > > > > hello from core 9
> > > > > hello from core 17
> > > > > hello from core 18
> > > > > hello from core 19
> > > > > hello from core 20
> > > > > hello from core 21
> > > > > hello from core 23
> > > > > hello from core 0
> > > > > hello from core 10
> > > > >
> > > > > testpmd output:
> > > > > C:\dpdk\build\app>dpdk-testpmd.exe
> > > > > EAL: Detected CPU lcores: 24
> > > > > EAL: Detected NUMA nodes: 2
> > > > > EAL: Multi-process support is requested, but not available.
> > > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0
> > > > > (socket 0)
> > > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > > status=0 syndrome=0
> > > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > > EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1
> > > > > (socket 0)
> > > > > mlx5_common: DevX read access NIC register=0X9055 failed errno=0
> > > > > status=0 syndrome=0
> > > > > mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> > > > > mlx5_net: Rx CQE 128B compression is not supported.
> > > > > testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> > > > > testpmd: preferred mempool ops selected: ring_mp_mc
> > > > > Configuring Port 0 (socket 0)
> > > > > mlx5_net: port 0 failed to set defaults flows
> > > > > Fail to start port 0: Invalid argument
> > > > > Configuring Port 1 (socket 0)
> > > > > mlx5_net: port 1 failed to set defaults flows
> > > > > Fail to start port 1: Invalid argument
> > > > > Please stop the ports first
> > > > > Done
> > > > > No commandline core given, start packet forwarding
> > > > > Not all ports were started
> > > > > Press enter to exit
> > > > >
> > > > >
> > > > > Stopping port 0...
> > > > > Stopping ports...
> > > > > Done
> > > > >
> > > > > Stopping port 1...
> > > > > Stopping ports...
> > > > > Done
> > > > >
> > > > > Shutting down port 0...
> > > > > Closing ports...
> > > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > > Port 0 is closed
> > > > > Done
> > > > >
> > > > > Shutting down port 1...
> > > > > Closing ports...
> > > > > mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> > > > > Port 1 is closed
> > > > > Done
> > > > >
> > > > > Bye...
> > > > >
> > > > > Kind regards,
> > > > > Robert



end of thread

Thread overview: 8+ messages
2022-07-11 14:10 Windows examples failed to start using mellanox card Robert Hable
2022-07-12  8:27 ` Dmitry Kozlyuk
2022-07-12  8:43   ` Tal Shnaiderman
2022-07-12 10:23     ` Tal Shnaiderman
2022-07-12 14:03       ` AW: " Robert Hable
2022-07-12 14:35         ` Tal Shnaiderman
2022-07-13  7:57           ` AW: " Robert Hable
2022-07-26 14:14             ` Tal Shnaiderman
