DPDK usage discussions
From: Igor Gutorov <igootorov@gmail.com>
To: Patrick Mahan <mahan@mahan.org>
Cc: users@dpdk.org
Subject: Re: Calculating number of HugeTLBs required
Date: Tue, 11 Mar 2025 15:59:40 +0300
Message-ID: <CAL7bPf3agr7dF=2ukJ6rGhbEcm_HbLsto69czyihCw-aw-ohwQ@mail.gmail.com>
In-Reply-To: <60c59649-a393-4d45-81e2-faad68b8a506@mahan.org>

On Tue, Mar 11, 2025 at 9:01 AM Patrick Mahan <mahan@mahan.org> wrote:
>
> On 3/10/25 2:21 PM, Stephen Hemminger wrote:
> > On Tue, 4 Mar 2025 11:22:50 -0800
> > Patrick Mahan <mahan@mahan.org> wrote:
> >
> >> Morning,
> >>
> >> This might be a simple question, but it has my curiosity itching.
> >>
> >> I did a quick scan through the documentation, but I did not see any good
> >> guidelines for determining the number of HugeTLB pages needed based on the
> >> number of PMDs and the number of RX/TX queues.
> >>
> >> I am looking at three different platforms: one has 2 ports (ixgbe), one has 3
> >> ports (1 e1000 and 2 i40es), and a third has 2 ports (1 e1000 and 1 Cavium LiquidIO).
> >>
> >> I'm trying to come up with some means of defining the HugeTLB requirements other
> >> than trial and error.
> >>
> >> Thanks,
> >>
> >> Patrick
> >
> > There is no exact way to estimate this. But for most applications the largest memory footprint
> > is the mbuf pool. For sizing the mbuf pool you need to account for all the NICs, queues, and descriptor arrays,
> > as well as any internal staging buffers.
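
As a rough illustration of that accounting, here is a back-of-the-envelope
sketch. Every count below is a hypothetical placeholder to be replaced with
your platform's real values; the shape of the formula is similar in spirit
to the mbuf-count heuristic in DPDK's l3fwd example:

  # Back-of-the-envelope mbuf pool sizing. All numbers are hypothetical
  # placeholders; substitute your own port/queue/descriptor counts.
  num_ports = 2            # e.g. the 2-port ixgbe box
  rx_queues_per_port = 4
  tx_queues_per_port = 4
  nb_rxd = 1024            # RX descriptors per ring
  nb_txd = 1024            # TX descriptors per ring
  num_lcores = 4
  burst_size = 32          # in-flight staging buffers per lcore
  mempool_cache = 256      # per-lcore mempool cache size

  nb_mbufs = (num_ports * rx_queues_per_port * nb_rxd
              + num_ports * tx_queues_per_port * nb_txd
              + num_ports * num_lcores * burst_size
              + num_lcores * mempool_cache)

  mbuf_bytes = 2048 + 128  # data room + headroom; ignores struct/mempool overhead
  pool_mib = nb_mbufs * mbuf_bytes / 2**20
  print(f"~{nb_mbufs} mbufs -> ~{pool_mib:.0f} MiB before per-mbuf overhead")

Round the result up to a whole number of hugepages and leave headroom for
the EAL's own allocations (rings, memzones, per-mbuf metadata, etc.).
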
>
> That's what I thought, but I was hoping for some group wisdom here.  I am trying
> to construct the various startup scripts (two platforms are systemd and one is
> still sysvinit) to calculate this based on, as you pointed out, the # of NICs,
> the # of queues, etc.  The code was already using HugeTLBs for other things
> (RIB/FIB, database), and I had written code that calculates those reservations
> at boot time based on the # of entries in the database, the maximum # of
> routes, etc.  The move to DPDK adds more complexity, as we are looking at
> leveraging more CPU cores, which may mean more queues, which means more
> packets, etc.
>
> Right now I've done some back-of-the-envelope calculations, but that is not a
> dynamic way to approach this.
>
> Anyways, thanks for responding...
>
> Patrick

Hello Patrick,

One very crude way I use is to allocate "more than enough" pages, say,
32 1G pages. Then check
`/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages`;
it should read 32 (note `node0` in the path; adjust for your NUMA node).
Start your application and check `free_hugepages` again. The difference
is the number of hugepages your application actually uses. I then reduce
the total number of allocated pages to drop the unused ones.

As you've said, this is still somewhat of a trial-and-error approach, but
in practice it requires only one application startup/shutdown cycle, and
it could probably be automated, as in the sketch below.
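
A minimal sketch of that automation, assuming 1G pages on NUMA node 0;
the application path and the 10-second settle time are hypothetical
placeholders:

  #!/usr/bin/env python3
  # Automate the before/after free_hugepages check described above.
  import subprocess
  import time

  FREE = ("/sys/devices/system/node/node0/hugepages/"
          "hugepages-1048576kB/free_hugepages")

  def free_pages() -> int:
      with open(FREE) as f:
          return int(f.read())

  before = free_pages()
  app = subprocess.Popen(["./your-dpdk-app"])  # placeholder binary name
  time.sleep(10)               # crude: wait for the EAL to finish allocating
  after = free_pages()
  print(f"application holds {before - after} x 1G hugepages")
  app.terminate()
  app.wait()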

--
Regards,
Igor

Thread overview: 4 messages
2025-03-04 19:22 Patrick Mahan
2025-03-10 21:21 ` Stephen Hemminger
2025-03-11  6:01   ` Patrick Mahan
2025-03-11 12:59     ` Igor Gutorov [this message]
