* [Bug 983] [21.11] net/mlx5: increase in number of required hugepages
From: bugzilla @ 2022-03-25 15:56 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=983
Bug ID: 983
Summary: [21.11] net/mlx5: increase in number of required hugepages
Product: DPDK
Version: 21.11
Hardware: x86
OS: Linux
Status: UNCONFIRMED
Severity: normal
Priority: Normal
Component: ethdev
Assignee: dev@dpdk.org
Reporter: tudor.cornea@gmail.com
Target Milestone: future
Greetings,
I'm developing a DPDK application and recently needed to upgrade from version
20.11 to version 21.11.
I noticed that when I use a ConnectX-6 NIC in PCI passthrough mode, it seems I
have to allocate a few extra hugepages. I do not see this behavior when I use
the NIC in SR-IOV mode, with other drivers, or with version 20.11.
I think I may have managed to reproduce the same behavior using dpdk-testpmd.
I am using 2 MB hugepages on my local setup.
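As a sanity check, the configured hugepage size and the current page counts can
be read from /proc/meminfo before each run:
grep -i huge /proc/meminfo    # Hugepagesize should report 2048 kB; also shows HugePages_Total/Free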
DPDK 20.11 - Mlx5 Driver
NR_HUGEPAGES=200
mkdir -p /mnt/hugepages
mount -t hugetlbfs hugetlbfs /mnt/hugepages
sysctl -w vm.nr_hugepages="${NR_HUGEPAGES}"
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 --log-level=.*,8
DPDK 21.11 - Mlx5 Driver
NR_HUGEPAGES=220
mkdir -p /mnt/hugepages
mount -t hugetlbfs hugetlbfs /mnt/hugepages
sysctl -w vm.nr_hugepages="${NR_HUGEPAGES}"
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 --log-level=.*,8
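To compare the actual consumption between the two versions, one simple approach
is to snapshot the free hugepage count before launching testpmd and again once
it reaches its prompt; the drop in HugePages_Free is what the run pinned:
grep HugePages_Free /proc/meminfo    # before starting dpdk-testpmd
# ... start dpdk-testpmd in another terminal and wait for the testpmd> prompt ...
grep HugePages_Free /proc/meminfo    # after init; the drop is the number of pages this run consumed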
It seems that starting with DPDK 21.11, I have to allocate an extra 20
hugepages, otherwise the driver fails to allocate the mbuf pool.
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
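For what it's worth, a back-of-the-envelope estimate of the mbuf pool footprint
alone, assuming the reported element size (2176 bytes) is close to the real
per-mbuf footprint in the pool:
# 171456 mbufs x 2176 bytes ~= 356 MiB ~= 178 x 2 MiB hugepages, before counting
# the EAL heap, descriptor rings, and any driver-internal allocations.
echo $(( 171456 * 2176 / (2 * 1024 * 1024) ))    # prints 177 (integer division)
So with a 200-page budget only about 22 pages are left for everything else, and
even a modest increase in the PMD's own memory use in 21.11 would be enough to
tip it over.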
I am trying to keep the number of hugepages to a minimum, and I am curious
whether something changed in the new driver that could cause it to require more
hugepages when managing a PF. I've looked at the PMD guide [1] and then at the
code, but I haven't yet found anything that would explain the increase.
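As a possible workaround rather than a fix, the testpmd reproduction can also
cap the pool size explicitly with the --total-num-mbufs option, which keeps the
pool itself well below the 200-page budget, e.g.:
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 -- --total-num-mbufs=65536    # ~68 x 2 MiB pages for the pool
This only changes what testpmd asks for; it does not explain the driver-side
increase.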
The NIC that I'm using is the following:
00:04.0 Ethernet controller: Mellanox Technologies MT28908 Family [ConnectX-6]
OS Distribution: Ubuntu 20.04
[1] https://doc.dpdk.org/guides-21.11/nics/mlx5.html?highlight=mlx5
--
You are receiving this mail because:
You are the assignee for the bug.
* [Bug 983] [21.11] net/mlx5: increase in number of required hugepages
From: bugzilla @ 2022-08-28 21:02 UTC (permalink / raw)
To: dev
https://bugs.dpdk.org/show_bug.cgi?id=983
Asaf Penso (asafp@nvidia.com) changed:
What        |Removed      |Added
----------------------------------------------------------------------------
Status      |UNCONFIRMED  |RESOLVED
CC          |             |asafp@nvidia.com
Resolution  |---          |WONTFIX
--- Comment #3 from Asaf Penso (asafp@nvidia.com) ---
Nothing to fix.
--
You are receiving this mail because:
You are the assignee for the bug.