DPDK patches and discussions
From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [Bug 983] [21.11] net/mlx5: increase in number of required hugepages
Date: Fri, 25 Mar 2022 15:56:32 +0000	[thread overview]
Message-ID: <bug-983-3@http.bugs.dpdk.org/> (raw)

https://bugs.dpdk.org/show_bug.cgi?id=983

            Bug ID: 983
           Summary: [21.11] net/mlx5: increase in number of required
                    hugepages
           Product: DPDK
           Version: 21.11
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: tudor.cornea@gmail.com
  Target Milestone: future

Greetings,


I'm developing a DPDK application, and recently I've needed to upgrade from
version 20.11 to version 21.11.

I noticed that when I use a ConnectX-6 NIC in PCI passthrough mode, I need to
allocate a few extra hugepages. I have not seen this behavior when using the
NIC in SR-IOV mode or with other drivers, and I did not see it with version
20.11.

I think I may have managed to reproduce the same behavior using dpdk-testpmd.
I am using 2 MB hugepages on my local setup.

DPDK 20.11 - Mlx5 Driver

# Reserve 2 MB hugepages, mount hugetlbfs and start testpmd on cores 1-4
NR_HUGEPAGES=200
mkdir -p /mnt/hugepages
mount -t hugetlbfs hugetlbfs /mnt/hugepages
sysctl vm.nr_hugepages="${NR_HUGEPAGES}"
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 --log-level='.*,8'

DPDK 21.11 - Mlx5 Driver

# Same steps, but 220 hugepages are reserved for 21.11
NR_HUGEPAGES=220
mkdir -p /mnt/hugepages
mount -t hugetlbfs hugetlbfs /mnt/hugepages
sysctl vm.nr_hugepages="${NR_HUGEPAGES}"
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 --log-level='.*,8'
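
To quantify the difference between the two releases, one thing that can be
checked (a generic hint, nothing specific to mlx5) is the kernel's hugepage
counters right before starting testpmd and again while it is running:

# The drop in HugePages_Free while testpmd runs shows how many 2 MB pages
# the process actually consumed
grep -E 'HugePages_(Total|Free)' /proc/meminfo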

It seems that starting with DPDK 21.11 I have to allocate an extra 20
hugepages; otherwise, creation of the mbuf pool fails:

testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
EAL: Error - exiting with code: 1
  Cause: Creation of mbuf pool for socket 0 failed: Cannot allocate memory
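
For what it's worth, my own back-of-the-envelope math (object data only,
ignoring mempool headers and per-object overhead) suggests the pool above
already consumes most of the reserved 2 MB pages, so a relatively small
increase in other allocations is enough to tip it over:

# 171456 mbufs * 2176 bytes ~= 373 MB, i.e. roughly 178 x 2 MB hugepages
echo $(( 171456 * 2176 / (2 * 1024 * 1024) ))   # prints 177 (rounds up to ~178 pages)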

I am trying to keep the number of hugepages used to a minimum, and I was
curious whether something changed in the new driver that could cause it to
require more hugepages when managing a PF. I've looked at the PMD guide [1]
and then at the code, but I haven't yet found anything that explains the
increase.
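
One workaround I'm considering in the meantime (the value below is only an
example, not a recommendation) is capping the mbuf pool explicitly with
testpmd's --total-num-mbufs application option:

# Cap the mbuf pool instead of letting testpmd size it from port/queue defaults;
# 65536 is just an example value
./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-4 --log-level='.*,8' -- --total-num-mbufs=65536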

The NIC that I'm using is the following:
00:04.0 Ethernet controller: Mellanox Technologies MT28908 Family [ConnectX-6]

OS Distribution: Ubuntu 20.04

[1] https://doc.dpdk.org/guides-21.11/nics/mlx5.html?highlight=mlx5

-- 
You are receiving this mail because:
You are the assignee for the bug.
