DPDK usage discussions
From: Stephen Hemminger <stephen@networkplumber.org>
To: Antonio Di Bacco <a.dibacco.ks@gmail.com>
Cc: users@dpdk.org
Subject: Re: Max size for rte_mempool_create
Date: Wed, 9 Feb 2022 14:40:16 -0800
Message-ID: <20220209144016.19b75a9f@hermes.local>
In-Reply-To: <CAO8pfFnS0sOA4cmQ-CR1Gpc+QM3Fuh6L0ZEMm+d2RZ4BEd3SgA@mail.gmail.com>

On Wed, 9 Feb 2022 22:58:43 +0100
Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:

> Thanks! I already reserve huge pages from the kernel command line; I
> reserve six 1G hugepages. Is there any other reason for the ENOMEM?
> 
> On Wed, 9 Feb 2022 at 22:44, Stephen Hemminger <stephen@networkplumber.org>
> wrote:
> 
> > On Wed, 9 Feb 2022 22:20:34 +0100
> > Antonio Di Bacco <a.dibacco.ks@gmail.com> wrote:
> >  
> > > I have a system with two NUMA sockets. Each NUMA socket has 8GB of RAM.
> > > I reserve a total of 6 hugepages (1G).
> > >
> > > When I try to create a mempool (API rte_mempool_create) of 530432 mbufs
> > > (each one with 9108 bytes) I get an ENOMEM error.
> > >
> > > In theory this mempool should be around 4.8GB and the hugepages are
> > > enough to hold it.
> > > Why is this failing?
> >
> > This is likely because the hugepages have to be contiguous and
> > the kernel has to have that many free pages (especially true with
> > 1G pages). Therefore it is recommended to configure and reserve
> > huge pages on the kernel command line during boot.
> >  
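
For reference, the boot-time reservation mentioned above typically looks
like this on the kernel command line (example values for six 1G pages;
adjust to your system):

  default_hugepagesz=1G hugepagesz=1G hugepages=6

After boot, HugePages_Total and HugePages_Free in /proc/meminfo show how
many were actually reserved.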

Your calculations look about right:
  elementsize = sizeof(struct rte_mbuf)
              + private_size
              + RTE_PKTMBUF_HEADROOM
              + mbuf_size;
  objectsize  = rte_mempool_calc_obj_size(elementsize, 0, NULL);


With mbuf_size of 9108 and typical DPDK defaults (sizeof(struct rte_mbuf)
= 128, RTE_PKTMBUF_HEADROOM = 128, private_size = 0):
  elementsize = 128 + 128 + 0 + 9108 = 9364
  mempool rounds 9364 up to the next cacheline multiple (64) = 9408
  mempool per-object header = 192
  objectsize  = 9408 + 192 = 9600 bytes per object

Total size of mempool requested = 530432 * 9600 = 5,092,147,200 bytes (~4.74 GiB)
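
The same arithmetic can be checked programmatically. Below is a minimal
sketch (not from the original post): it assumes priv_size = 0 and the
default RTE_PKTMBUF_HEADROOM of 128, and it uses rte_pktmbuf_pool_create
with an illustrative pool name, whereas the original code called
rte_mempool_create directly.

  #include <stdio.h>
  #include <stdint.h>

  #include <rte_eal.h>
  #include <rte_errno.h>
  #include <rte_lcore.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  int
  main(int argc, char **argv)
  {
          /* Pool dimensions taken from the thread above. */
          const unsigned int nb_mbufs  = 530432;
          const uint32_t     mbuf_size = 9108;

          if (rte_eal_init(argc, argv) < 0) {
                  printf("EAL init failed: %s\n", rte_strerror(rte_errno));
                  return 1;
          }

          /* Element size exactly as computed above (priv_size assumed 0). */
          uint32_t elementsize = sizeof(struct rte_mbuf)
                                 + RTE_PKTMBUF_HEADROOM + mbuf_size;

          /* Per-object size including the mempool header and trailer. */
          struct rte_mempool_objsz objsz;
          uint32_t objectsize = rte_mempool_calc_obj_size(elementsize, 0, &objsz);

          printf("per object: %u bytes (header %u, elt %u, trailer %u)\n",
                 objectsize, objsz.header_size, objsz.elt_size,
                 objsz.trailer_size);
          printf("pool payload needed: %.2f GiB\n",
                 (double)objectsize * nb_mbufs / (1 << 30));

          /* Illustrative create call; on failure rte_errno says why. */
          struct rte_mempool *mp = rte_pktmbuf_pool_create("big_pool",
                          nb_mbufs, 0 /* cache */, 0 /* priv */,
                          (uint16_t)(RTE_PKTMBUF_HEADROOM + mbuf_size),
                          rte_socket_id());
          if (mp == NULL)
                  printf("pool create failed: %s\n", rte_strerror(rte_errno));

          return 0;
  }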

Since this is a NUMA machine, you may also need to make sure that the
kernel put the hugepages on the right socket. Perhaps it split them
across sockets, so that neither node has enough 1G pages left for the
whole pool?
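
One quick way to see how the reserved pages were actually distributed is
the per-node sysfs counters (standard Linux paths; node0/node1 assumed
for a two-socket box):

  cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/nr_hugepages
  cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/nr_hugepages
  cat /sys/devices/system/node/node0/hugepages/hugepages-1048576kB/free_hugepages
  cat /sys/devices/system/node/node1/hugepages/hugepages-1048576kB/free_hugepages

If they ended up split 3/3, neither node has the five 1G pages a ~4.74G
pool needs on the socket you pass to rte_mempool_create.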

Thread overview: 5+ messages
2022-02-09 21:20 Antonio Di Bacco
2022-02-09 21:44 ` Stephen Hemminger
2022-02-09 21:58   ` Antonio Di Bacco
2022-02-09 22:40     ` Stephen Hemminger [this message]
2022-02-10  7:24       ` Gabor LENCSE
