DPDK usage discussions
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: Tal Shnaiderman <talshn@nvidia.com>, Ophir Munk <ophirmu@nvidia.com>
Cc: Robert Hable <robert.hable@massresponse.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: Windows examples failed to start using mellanox card
Date: Tue, 12 Jul 2022 11:27:21 +0300	[thread overview]
Message-ID: <20220712112721.19292750@sovereign> (raw)
In-Reply-To: <PAXP190MB1741191CBDBD63B50E0BB435E1879@PAXP190MB1741.EURP190.PROD.OUTLOOK.COM>

Tal, Ophir, could you advise?

2022-07-11 14:10 (UTC+0000), Robert Hable:
> Hello,
> 
> I am having trouble running DPDK on Windows. I am trying to use the example programs and testpmd, but they fail with some errors (see outputs below). The testpmd program also does not go into interactive mode and exits after a keypress.
> I am using Windows Server 2022 with a Mellanox ConnectX-4 Lx card and WinOF-2 2.90.50010 / SDK 2.90.25518.
> I am using the current DPDK build (version 22.07-rc3).
> 
> I followed the DPDK Windows guide, but I always get the following error:
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> 
> Does anybody have an idea how to resolve this problem, or at least how to get more information about why it failed?
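
More detail can usually be squeezed out of EAL and the driver by raising the
log level. A minimal sketch of how I would try it (the mlx5 log type names
below are my assumption; if they do not match, --log-level=*:debug raises
everything):

  C:\dpdk\build\app>dpdk-testpmd.exe --log-level=pmd.common.mlx5:debug --log-level=pmd.net.mlx5:debug -- -i

The extra output around the DevX register read and the "failed to set
defaults flows" message below should help narrow down where port start goes
wrong.
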
> Helloworld output:
> C:\dpdk\build\examples>dpdk-helloworld.exe  
> EAL: Detected CPU lcores: 24
> EAL: Detected NUMA nodes: 2
> EAL: Multi-process support is requested, but not available.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> hello from core 1
> hello from core 2
> hello from core 3
> hello from core 4
> hello from core 5
> hello from core 6
> hello from core 7
> hello from core 8
> hello from core 16
> hello from core 22
> hello from core 11
> hello from core 12
> hello from core 13
> hello from core 14
> hello from core 15
> hello from core 9
> hello from core 17
> hello from core 18
> hello from core 19
> hello from core 20
> hello from core 21
> hello from core 23
> hello from core 0
> hello from core 10
> 
> testpmd output:
> C:\dpdk\build\app>dpdk-testpmd.exe  
> EAL: Detected CPU lcores: 24
> EAL: Detected NUMA nodes: 2
> EAL: Multi-process support is requested, but not available.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.0 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> EAL: Probe PCI driver: mlx5_pci (15b3:1015) device: 0000:06:00.1 (socket 0)
> mlx5_common: DevX read access NIC register=0X9055 failed errno=0 status=0 syndrome=0
> mlx5_net: mlx5_os_dev_shared_handler_install: is not supported
> mlx5_net: Rx CQE 128B compression is not supported.
> testpmd: create a new mbuf pool <mb_pool_0>: n=331456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> testpmd: create a new mbuf pool <mb_pool_1>: n=331456, size=2176, socket=1
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> mlx5_net: port 0 failed to set defaults flows
> Fail to start port 0: Invalid argument
> Configuring Port 1 (socket 0)
> mlx5_net: port 1 failed to set defaults flows
> Fail to start port 1: Invalid argument
> Please stop the ports first
> Done
> No commandline core given, start packet forwarding
> Not all ports were started
> Press enter to exit
> 
> 
> Stopping port 0...
> Stopping ports...
> Done
> 
> Stopping port 1...
> Stopping ports...
> Done
> 
> Shutting down port 0...
> Closing ports...
> mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> Port 0 is closed
> Done
> 
> Shutting down port 1...
> Closing ports...
> mlx5_net: mlx5_os_dev_shared_handler_uninstall: is not supported
> Port 1 is closed
> Done
> 
> Bye...
> 
> Kind regards,
> Robert
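
Two more things I noticed:

testpmd drops out after a keypress simply because it was started with no
arguments, so it falls through to "No commandline core given, start packet
forwarding" and then "Press enter to exit". The interactive prompt needs -i
after the "--" separator, for example (the core list below is an arbitrary
choice):

  C:\dpdk\build\app>dpdk-testpmd.exe -l 0-3 -- -i

That alone will not fix the failing port start, but it keeps the prompt open
while debugging.

Also, the DevX register read fails before the ports are even configured, so
it is worth double-checking that the installed WinOF-2 and NIC firmware
versions match the prerequisites for mlx5 on Windows in the DPDK guides.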



Thread overview: 8+ messages
2022-07-11 14:10 Robert Hable
2022-07-12  8:27 ` Dmitry Kozlyuk [this message]
2022-07-12  8:43   ` Tal Shnaiderman
2022-07-12 10:23     ` Tal Shnaiderman
2022-07-12 14:03       ` AW: " Robert Hable
2022-07-12 14:35         ` Tal Shnaiderman
2022-07-13  7:57           ` AW: " Robert Hable
2022-07-26 14:14             ` Tal Shnaiderman
