DPDK usage discussions
From: Asaf Penso <asafp@nvidia.com>
To: Alberto Perro <alberto.perro@cern.ch>, "users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] MLX5 configuration and installation
Date: Thu, 27 May 2021 07:06:24 +0000
Message-ID: <DM8PR12MB54942EEB2C41832AB76EC0FACD239@DM8PR12MB5494.namprd12.prod.outlook.com>
In-Reply-To: <C879292E16A76B44BB25DA2F62F5F51C4A6F6A6B@CERNXCHG51.cern.ch>

Hello Alberto,

Is this issue specific to the helloworld example?
Does testpmd work OK for you?

Regards,
Asaf Penso
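A quick way to run the check Asaf suggests (a sketch, not from the thread; the `dpdk-testpmd` binary name and the `-a` allow-list option are the DPDK 20.11 conventions, and the PCI address is one of the ports from the log below):

```shell
# Probe a single ConnectX-5 port interactively with testpmd.
# 0000:81:00.0 is one of the ports from the error log; adjust to your system.
# -l 0-3 pins to four lcores, -n 4 sets the memory channel count.
sudo ./dpdk-testpmd -l 0-3 -n 4 -a 0000:81:00.0 -- -i
```

If testpmd probes the port successfully, the problem is likely local to the example's setup rather than the mlx5 PMD itself.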

>-----Original Message-----
>From: users <users-bounces@dpdk.org> On Behalf Of Alberto Perro
>Sent: Thursday, May 20, 2021 12:53 PM
>To: users@dpdk.org
>Subject: [dpdk-users] MLX5 configuration and installation
>
>Good morning,
>
>I want to evaluate DPDK on my servers, which are equipped with four Mellanox
>ConnectX-5 Ex 2x100G cards.
>I have installed MLNX_OFED 5.3-1.0.0 from NVIDIA, built from source with the
>`--upstream-libs --dpdk` flags.
>I downloaded DPDK 20.11 LTS and compiled it following the quick start guide.
>I have allocated 1024 2MB hugepages for each NUMA node.
>When I try to run dpdk-helloworld I get:
>
>```
>[aperro@ebstortest02 examples]$ sudo ./dpdk-helloworld
>EAL: Detected 64 lcore(s)
>EAL: Detected 2 NUMA nodes
>EAL: Detected static linkage of DPDK
>EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>EAL: Selected IOVA mode 'PA'
>EAL: No available hugepages reported in hugepages-1048576kB
>EAL: Probing VFIO support...
>EAL: VFIO support initialized
>EAL: DPDK is running on a NUMA system, but is compiled without NUMA support.
>EAL: This will have adverse consequences for performance and usability.
>EAL: Please use --legacy-mem option, or recompile with NUMA support.
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:81:00.0 (socket 1)
>mlx5_pci: probe of PCI device 0000:81:00.0 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:81:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:81:00.1 (socket 1)
>mlx5_pci: probe of PCI device 0000:81:00.1 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:81:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:a1:00.0 (socket 1)
>mlx5_pci: probe of PCI device 0000:a1:00.0 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:a1:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:a1:00.1 (socket 1)
>mlx5_pci: probe of PCI device 0000:a1:00.1 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:a1:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:c1:00.0 (socket 1)
>mlx5_pci: probe of PCI device 0000:c1:00.0 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:c1:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:c1:00.1 (socket 1)
>mlx5_pci: probe of PCI device 0000:c1:00.1 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:c1:00.1 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:c2:00.0 (socket 1)
>mlx5_pci: probe of PCI device 0000:c2:00.0 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:c2:00.0 cannot be used
>EAL: Probe PCI driver: mlx5_pci (15b3:1019) device: 0000:c2:00.1 (socket 1)
>mlx5_pci: probe of PCI device 0000:c2:00.1 aborted after encountering an error: Cannot allocate memory
>common_mlx5: Failed to load driver = mlx5_pci.
>
>EAL: Requested device 0000:c2:00.1 cannot be used
>EAL: No legacy callbacks, legacy socket not created
>hello from core 1 ...
>```
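The "compiled without NUMA support" warning in the log, together with the "Cannot allocate memory" probe failures on socket 1, suggests EAL could not obtain NUMA-aware memory. One possible fix, not confirmed by the thread: rebuild DPDK with the libnuma headers present (the `numactl-devel` package name assumes a RHEL/CentOS-style host, as the shell prompt in the log hints):

```shell
# If libnuma headers were absent when meson configured the build, DPDK ends
# up without NUMA support. Install them and rebuild from a clean tree.
sudo yum install -y numactl-devel
cd dpdk-20.11          # path to the DPDK 20.11 source tree (example path)
rm -rf build           # discard the old configuration
meson setup build
ninja -C build
```

After reconfiguring, meson's dependency summary should list libnuma as found, and the NUMA warning should disappear on the next run.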


Thread overview: 2+ messages
2021-05-20  9:52 Alberto Perro
2021-05-27  7:06 ` Asaf Penso [this message]
