* Max size for rte_mempool_create
@ 2022-02-09 21:20 Antonio Di Bacco
2022-02-09 21:44 ` Stephen Hemminger
0 siblings, 1 reply; 5+ messages in thread
From: Antonio Di Bacco @ 2022-02-09 21:20 UTC (permalink / raw)
To: users
I have a system with two numa sockets. Each numa socket has 8GB of RAM.
I reserve a total of 6 hugepages (1G).
When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
(each one with 9108 bytes) I get an ENOMEM error.
In theory this mempool should be around 4.8GB and the hugepages are enough
to hold it.
Why is this failing?
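Roughly, the call looks like this (a simplified sketch rather than my exact
code; the pool name, cache size, and constructor choices here are illustrative):

    #include <stdio.h>
    #include <rte_errno.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* after rte_eal_init(): element = mbuf header + headroom + data room */
    struct rte_mempool *mp = rte_mempool_create(
            "big_pool",                              /* pool name */
            530432,                                  /* number of mbufs */
            sizeof(struct rte_mbuf) + RTE_PKTMBUF_HEADROOM + 9108,
            256,                                     /* per-lcore cache size */
            sizeof(struct rte_pktmbuf_pool_private), /* pool private area */
            rte_pktmbuf_pool_init, NULL,             /* pool constructor */
            rte_pktmbuf_init, NULL,                  /* per-mbuf constructor */
            rte_socket_id(),                         /* allocate on this socket */
            0);                                      /* flags */
    if (mp == NULL)
            printf("rte_mempool_create: %s\n", rte_strerror(rte_errno));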
* Re: Max size for rte_mempool_create
2022-02-09 21:20 Max size for rte_mempool_create Antonio Di Bacco
@ 2022-02-09 21:44 ` Stephen Hemminger
2022-02-09 21:58 ` Antonio Di Bacco
0 siblings, 1 reply; 5+ messages in thread
From: Stephen Hemminger @ 2022-02-09 21:44 UTC (permalink / raw)
To: Antonio Di Bacco; +Cc: users
On Wed, 9 Feb 2022 22:20:34 +0100
Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:
> I have a system with two numa sockets. Each numa socket has 8GB of RAM.
> I reserve a total of 6 hugepages (1G).
>
> When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
> (each one with 9108 bytes) I get an ENOMEM error.
>
> In theory this mempool should be around 4.8GB and the hugepages are enough
> to hold it.
> Why is this failing?
This is likely because the hugepages have to be contiguous and
the kernel has to have that many free pages (especially true with 1G pages).
Therefore it is recommended to configure and reserve hugepages
on the kernel command line during boot.
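For example, one possible setting for this machine would be to boot with:

    default_hugepagesz=1G hugepagesz=1G hugepages=6

in the kernel command line (the exact counts depend on what else needs memory).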
* Re: Max size for rte_mempool_create
2022-02-09 21:44 ` Stephen Hemminger
@ 2022-02-09 21:58 ` Antonio Di Bacco
2022-02-09 22:40 ` Stephen Hemminger
0 siblings, 1 reply; 5+ messages in thread
From: Antonio Di Bacco @ 2022-02-09 21:58 UTC (permalink / raw)
To: stephen; +Cc: users
Thanks! I already reserve hugepages from the kernel command line: six
1G hugepages. Is there any other reason for the ENOMEM?
On Wed, 9 Feb 2022 at 22:44, Stephen Hemminger <stephen@networkplumber.org>
wrote:
> On Wed, 9 Feb 2022 22:20:34 +0100
> Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:
>
> > I have a system with two numa sockets. Each numa socket has 8GB of RAM.
> > I reserve a total of 6 hugepages (1G).
> >
> > When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
> > (each one with 9108 bytes) I get an ENOMEM error.
> >
> > In theory this mempool should be around 4.8GB and the hugepages are enough
> > to hold it.
> > Why is this failing?
>
> This is likely because the hugepages have to be contiguous and
> the kernel has to have that many free pages (especially true with 1G pages).
> Therefore it is recommended to configure and reserve hugepages
> on the kernel command line during boot.
>
* Re: Max size for rte_mempool_create
2022-02-09 21:58 ` Antonio Di Bacco
@ 2022-02-09 22:40 ` Stephen Hemminger
2022-02-10 7:24 ` Gabor LENCSE
0 siblings, 1 reply; 5+ messages in thread
From: Stephen Hemminger @ 2022-02-09 22:40 UTC (permalink / raw)
To: Antonio Di Bacco; +Cc: users
On Wed, 9 Feb 2022 22:58:43 +0100
Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:
> Thanks! I already reserve hugepages from the kernel command line: six
> 1G hugepages. Is there any other reason for the ENOMEM?
>
> On Wed, 9 Feb 2022 at 22:44, Stephen Hemminger <stephen@networkplumber.org>
> wrote:
>
> > On Wed, 9 Feb 2022 22:20:34 +0100
> > Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:
> >
> > > I have a system with two numa sockets. Each numa socket has 8GB of RAM.
> > > I reserve a total of 6 hugepages (1G).
> > >
> > > When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
> > > (each one with 9108 bytes) I get an ENOMEM error.
> > >
> > > In theory this mempool should be around 4.8GB and the hugepages are enough
> > > to hold it.
> > > Why is this failing?
> >
> > This is likely because the hugepages have to be contiguous and
> > the kernel has to have that many free pages (especially true with 1G pages).
> > Therefore it is recommended to configure and reserve hugepages
> > on the kernel command line during boot.
> >
Your calculations look about right:

    elementsize = sizeof(struct rte_mbuf)
                + private_size
                + RTE_PKTMBUF_HEADROOM
                + mbuf_size;
    objectsize = rte_mempool_calc_obj_size(elementsize, 0, NULL);

With mbuf_size of 9108 and typical DPDK defaults (private_size of 0):

    elementsize = 128 + 0 + 128 + 9108 = 9364
    mempool rounds 9364 up to a cacheline (64) = 9408
    mempool object header = 192
    objectsize = 9408 + 192 = 9600 bytes per object

Total size of mempool requested = 530432 * 9600 = 4.74G
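If you want to double-check those numbers on your build, a quick sketch
(rte_mempool_calc_obj_size() only does the size arithmetic, so it should not
need EAL setup; the exact header/trailer sizes depend on the build):

    #include <stdio.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    int main(void)
    {
            /* element size as above; per-mbuf private_size assumed 0 */
            uint32_t elt = sizeof(struct rte_mbuf)
                         + RTE_PKTMBUF_HEADROOM
                         + 9108;
            struct rte_mempool_objsz sz;
            uint32_t total = rte_mempool_calc_obj_size(elt, 0, &sz);

            printf("header %u + elt %u + trailer %u = %u bytes/object\n",
                   sz.header_size, sz.elt_size, sz.trailer_size, total);
            printf("530432 objects ~ %.2f GiB\n",
                   530432 * (double)total / (1 << 30));
            return 0;
    }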
If this is a NUMA machine, you may need to make sure that the kernel
has put the hugepages on the right socket.
Perhaps it decided to split them across sockets?
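You can see what the kernel actually did with, for example:

    cat /sys/devices/system/node/node*/hugepages/hugepages-1048576kB/nr_hugepages

If that prints 3 and 3, neither socket has the ~4.74G of hugepage memory
the pool needs in one place.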
* Re: Max size for rte_mempool_create
2022-02-09 22:40 ` Stephen Hemminger
@ 2022-02-10 7:24 ` Gabor LENCSE
0 siblings, 0 replies; 5+ messages in thread
From: Gabor LENCSE @ 2022-02-10 7:24 UTC (permalink / raw)
To: users
Dear All,
>>>> I have a system with two numa sockets. Each numa socket has 8GB of RAM.
>>>> I reserve a total of 6 hugepages (1G).
Unless the system was specially crafted, it is very likely that 4GB
belongs to each NUMA node, so it is not possible for the kernel to put
all 6 hugepages into the memory of the same NUMA node.

Perhaps adding another 8GB of RAM and reserving 12 hugepages could help.
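(If the RAM is extended, the per-node split can also be requested explicitly
at runtime, for example:

    echo 6 > /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages

though allocating 1G pages after boot may fail once memory is fragmented, so
the boot-time reservation Stephen suggested remains the safer option.)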
Gábor