DPDK usage discussions
* Calculating number of HugeTLBs required
@ 2025-03-04 19:22 Patrick Mahan
  2025-03-10 21:21 ` Stephen Hemminger
  0 siblings, 1 reply; 4+ messages in thread
From: Patrick Mahan @ 2025-03-04 19:22 UTC (permalink / raw)
  To: users

Morning,

This might be a simple question, but it has my curiosity itching.

I did a quick scan through the documentation, but I did not see any good 
guidelines for determining the number of HugeTLB pages based on the number of 
PMDs and the number of RX/TX queues.

I am looking at three different platforms: one has 2 ports (ixgbe), one has 3 
ports (1 e1000 and 2 i40es), and a third has 2 ports (1 e1000 and 1 Cavium LiquidIO).

I'm trying to come up with some means of defining the HugeTLB requirements other 
than trial and error.

Thanks,

Patrick


* Re: Calculating number of HugeTLBs required
  2025-03-04 19:22 Calculating number of HugeTLBs required Patrick Mahan
@ 2025-03-10 21:21 ` Stephen Hemminger
  2025-03-11  6:01   ` Patrick Mahan
  0 siblings, 1 reply; 4+ messages in thread
From: Stephen Hemminger @ 2025-03-10 21:21 UTC (permalink / raw)
  To: Patrick Mahan; +Cc: users

On Tue, 4 Mar 2025 11:22:50 -0800
Patrick Mahan <mahan@mahan.org> wrote:

> Morning,
> 
> This might be a simple question, but it has my curiosity itching.
> 
> I did a quick scan through the documentation, but I did not see any good 
> guidelines for determining the number of HugeTLB pages based on the number of 
> PMDs and the number of RX/TX queues.
> 
> I am looking at three different platforms: one has 2 ports (ixgbe), one has 3 
> ports (1 e1000 and 2 i40es), and a third has 2 ports (1 e1000 and 1 Cavium LiquidIO).
> 
> I'm trying to come up with some means of defining the HugeTLB requirements other 
> than trial and error.
> 
> Thanks,
> 
> Patrick

There is no exact way to estimate this. But for most applications the largest memory footprint
is the mbuf pool. For sizing the mbuf pool you need to account for all the NICs, queues, and
descriptor arrays, as well as any internal staging buffers.
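
As a rough illustration only (not taken from this thread), a sizing sketch loosely
following the DPDK sample applications might look like the code below; every
port/queue/descriptor/lcore count is a hypothetical placeholder, to be replaced by
whatever your application actually configures.

/* Rough mbuf-pool sizing sketch, loosely following the DPDK sample
 * applications.  Every count below is a hypothetical placeholder --
 * substitute whatever your application actually configures.
 */
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_PORTS    2     /* ports bound to DPDK */
#define NB_RXQ      4     /* RX queues per port */
#define NB_TXQ      4     /* TX queues per port */
#define RX_DESC     1024  /* RX descriptors per queue */
#define TX_DESC     1024  /* TX descriptors per queue */
#define NB_LCORES   4     /* forwarding lcores */
#define BURST_SIZE  32    /* mbufs an lcore may hold during a burst */
#define CACHE_SIZE  256   /* per-lcore mempool cache */

static struct rte_mempool *
create_pktmbuf_pool(int socket_id)
{
        /* One mbuf per descriptor slot, plus per-lcore burst and cache
         * headroom, plus whatever staging the application itself does. */
        unsigned int nb_mbufs =
                NB_PORTS * (NB_RXQ * RX_DESC + NB_TXQ * TX_DESC) +
                NB_LCORES * (BURST_SIZE + CACHE_SIZE);

        return rte_pktmbuf_pool_create("pkt_pool", nb_mbufs, CACHE_SIZE,
                                       0, RTE_MBUF_DEFAULT_BUF_SIZE,
                                       socket_id);
}

Each mbuf then costs roughly RTE_MBUF_DEFAULT_BUF_SIZE (a bit over 2 KiB of buffer)
plus the mbuf header and mempool overhead, so dividing the resulting pool size by
your hugepage size gives a first-order page count; leave headroom on top of that
for EAL memzones, rings, hash tables and so on.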


* Re: Calculating number of HugeTLBs required
  2025-03-10 21:21 ` Stephen Hemminger
@ 2025-03-11  6:01   ` Patrick Mahan
  2025-03-11 12:59     ` Igor Gutorov
  0 siblings, 1 reply; 4+ messages in thread
From: Patrick Mahan @ 2025-03-11  6:01 UTC (permalink / raw)
  To: Stephen Hemminger; +Cc: users

On 3/10/25 2:21 PM, Stephen Hemminger wrote:
> On Tue, 4 Mar 2025 11:22:50 -0800
> Patrick Mahan <mahan@mahan.org> wrote:
> 
>> Morning,
>>
>> This might be a simple question, but it has my curiosity itching.
>>
>> I did a quick scan through the documentation, but I did not see any good
>> guidelines for determining the number of HugeTLB pages based on the number of
>> PMDs and the number of RX/TX queues.
>>
>> I am looking at three different platforms: one has 2 ports (ixgbe), one has 3
>> ports (1 e1000 and 2 i40es), and a third has 2 ports (1 e1000 and 1 Cavium LiquidIO).
>>
>> I'm trying to come up with some means of defining the HugeTLB requirements other
>> than trial and error.
>>
>> Thanks,
>>
>> Patrick
> 
> There is no exact way to estimate this. But for most applications the largest memory footprint
> is the mbuf pool. For sizing the mbuf pool you need to account for all the NICs, queues, and
> descriptor arrays, as well as any internal staging buffers.

That's what I thought, but I was hoping for some group wisdom here.  I am trying 
to construct the various startup scripts (two are systemd and one is still 
sysvinit) to calculate this based on, as you pointed out, the # of NICs, the # of 
queues, etc.  The code was already using HugeTLBs for other things (RIB/FIB, 
database), and I had written code that calculates those requirements at boot time 
based on the # of entries in the database, the maximum # of routes, etc.  The 
move to DPDK adds more complexity, as we are looking at leveraging more CPU 
cores, which may mean more queues, which means more packets, etc.

Right now I've done some back-of-the-envelope calculations, but that is not a 
dynamic way to approach this.
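
For what it's worth, one way to turn that kind of envelope math into something the
startup scripts or the application itself can compute is sketched below; the
per-mbuf footprint and the 2 MiB page size are assumptions, not measured values.

/* Back-of-the-envelope conversion from an estimated DPDK footprint to a
 * hugepage count.  A sketch only: MBUF_FOOTPRINT roughly covers buffer +
 * mbuf header + mempool overhead, and 2 MiB hugepages are assumed.
 */
#define HUGEPAGE_SZ     (2UL * 1024 * 1024)
#define MBUF_FOOTPRINT  3072UL

static unsigned long
hugepages_needed(unsigned long nb_mbufs, unsigned long other_bytes)
{
        unsigned long bytes = nb_mbufs * MBUF_FOOTPRINT + other_bytes;

        /* Round up; leave extra headroom on top of this for EAL internal
         * allocations (memzones, rings, hash tables, ...). */
        return (bytes + HUGEPAGE_SZ - 1) / HUGEPAGE_SZ;
}

Here other_bytes would cover the RIB/FIB, database, and any other hugepage-backed
structures already sized at boot.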

Anyways, thanks for responding...

Patrick


* Re: Calculating number of HugeTLBs required
  2025-03-11  6:01   ` Patrick Mahan
@ 2025-03-11 12:59     ` Igor Gutorov
  0 siblings, 0 replies; 4+ messages in thread
From: Igor Gutorov @ 2025-03-11 12:59 UTC (permalink / raw)
  To: Patrick Mahan; +Cc: users

On Tue, Mar 11, 2025 at 9:01 AM Patrick Mahan <mahan@mahan.org> wrote:
>
> On 3/10/25 2:21 PM, Stephen Hemminger wrote:
> > On Tue, 4 Mar 2025 11:22:50 -0800
> > Patrick Mahan <mahan@mahan.org> wrote:
> >
> >> Morning,
> >>
> >> This might be a simple question, but it has my curiosity itching.
> >>
> >> I did a quick scan through the documentation, but I did not see any good
> >> guidelines for determining the number of HugeTLB pages based on the number of
> >> PMDs and the number of RX/TX queues.
> >>
> >> I am looking at three different platforms: one has 2 ports (ixgbe), one has 3
> >> ports (1 e1000 and 2 i40es), and a third has 2 ports (1 e1000 and 1 Cavium LiquidIO).
> >>
> >> I'm trying to come up with some means of defining the HugeTLB requirements other
> >> than trial and error.
> >>
> >> Thanks,
> >>
> >> Patrick
> >
> > There is no exact way to estimate this. But for most applications the largest memory footprint
> > is the mbuf pool. For sizing the mbuf pool you need to account for all the NICs, queues, and
> > descriptor arrays, as well as any internal staging buffers.
>
> That's what I thought, but I was hoping for some group wisdom here.  I am trying
> to construct the various startup scripts (two are systemd and one is still
> sysvinit) to calculate this based on, as you pointed out, the # of NICs, the # of
> queues, etc.  The code was already using HugeTLBs for other things (RIB/FIB,
> database), and I had written code that calculates those requirements at boot time
> based on the # of entries in the database, the maximum # of routes, etc.  The
> move to DPDK adds more complexity, as we are looking at leveraging more CPU
> cores, which may mean more queues, which means more packets, etc.
>
> Right now I've done some back-of-the-envelope calculations, but that is not a
> dynamic way to approach this.
>
> Anyways, thanks for responding...
>
> Patrick

Hello Patrick,

One very crude way I use is to allocate "more than enough" pages, say
32 1G pages.  Then check
`/sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages`;
it should say 32 (note `node0` in the path, adjust accordingly).  Start
your application and check `free_hugepages` again.  The difference is
how many hugepages your application uses.  I then adjust the total
number of allocated pages to remove the unused ones.

As you said, this is somewhat of a trial-and-error approach, but in
practice it requires only one application startup/shutdown, and it could
probably be automated.
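
A minimal sketch of that automation (assuming 1G pages on NUMA node 0; adjust the
path for your system) is just reading the counter before and after initialization:

/* Read free_hugepages before rte_eal_init() and again once the mempools
 * and devices are set up; the difference is the number of 1G pages the
 * application actually consumed.  The path assumes node 0 and 1G pages.
 */
#include <stdio.h>

static long
free_1g_hugepages(void)
{
        const char *path =
                "/sys/devices/system/node/node0/hugepages/"
                "hugepages-1048576kB/free_hugepages";
        FILE *f = fopen(path, "r");
        long n = -1;

        if (f != NULL) {
                if (fscanf(f, "%ld", &n) != 1)
                        n = -1;
                fclose(f);
        }
        return n;
}

/* Usage:
 *     long before = free_1g_hugepages();
 *     ... rte_eal_init(), mempool and port setup ...
 *     long after  = free_1g_hugepages();
 *     printf("hugepages used: %ld\n", before - after);
 */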

--
Regards,
Igor

