DPDK usage discussions
From: Olivier MATZ <olivier.matz@6wind.com>
To: Pavel Shirshov <pavel.shirshov@gmail.com>
Cc: Olivier Matz <olivier.matz@6wind.com>,
	vlad.lazarenko@worldquant.com, users@dpdk.org
Subject: Re: [dpdk-users] Optimal number of elements in mempool n = (2^q - 1) vs examples, what is the right thing to do?
Date: Tue, 7 Feb 2017 09:53:04 +0100	[thread overview]
Message-ID: <20170207095304.24e13480@glumotte.dev.6wind.com> (raw)
In-Reply-To: <CALkUq2iup+nHSysf5wdxYbvdPK58CFwJ1XxvaMQU-uKaGj63Ag@mail.gmail.com>

Hi Pavel,

On Mon, 6 Feb 2017 13:07:35 -0800, Pavel Shirshov
<pavel.shirshov@gmail.com> wrote:
> On Mon, Feb 6, 2017 at 5:51 AM, Olivier Matz <olivier.matz@6wind.com>
> wrote:
> > Hi Vlad,
> >
> > On Wed, 1 Feb 2017 16:54:57 +0000, Vlad.Lazarenko@worldquant.com
> > (Lazarenko, Vlad (WorldQuant)) wrote:
> >> Hello,
> >>
> >> I'm new to DPDK and have noticed that documentation for
> >> rte_mempool_create states that the optimal size for a number of
> >> elements in the pool is n = (2^q-1). But in many examples it is
> >> simply set to 2^q (multi_process/simple_mp/main.c uses 2014, for
> >> example). This is a bit confusing. Is 2^q - 1 really the optimal
> >> number but examples don't use it, or maybe the documentation for
> >> the mempool is wrong, or...? If anyone could shed some light on
> >> this that'd be helpful.  
> >
> > That's true for a rte_mempool based on a rte_ring (this is the
> > default, but it has recently become possible to use another handler).
> >
> > The usable size of a rte_ring is (2^n - 1), because one slot in the
> > ring is reserved to distinguish between a full and an empty ring.
> > So, when a mempool uses a ring and we ask for 2^n elements, a ring
> > with 2^(n+1) slots (usable size 2^(n+1) - 1) is created, which can
> > consume additional memory.
> >
> > On the other hand, a mempool object is often much larger than a ring
> > entry (usually 8 bytes, the size of a pointer), especially since, by
> > default, objects are cache-aligned (usually to 64 bytes).
> >
> > So we may remove this note in the future since it's not very
> > relevant.
>
> Hi Olivier,
> 
> It's a good explanation of rte_mempool internals. I think it would be
> good to have your comment in the rte_mempool documentation. Could we
> add it there?

My comment applies to mempools based on rings (the default), but it no
longer holds when another handler is used (e.g. stack, or the upcoming
hardware handlers). Moreover, the size of the pointer array in the ring
is usually negligible compared with the size of the objects array in
the mempool, which makes this note not very useful.
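
To make the trade-off concrete, here is a minimal sketch (pool name,
cache size and mbuf count are purely illustrative) of a pktmbuf pool
created with n = 2^13 - 1 elements, so the backing ring needs only
2^13 slots:

  #include <stdio.h>
  #include <stdlib.h>
  #include <rte_eal.h>
  #include <rte_debug.h>
  #include <rte_lcore.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  #define NB_MBUFS   (8192 - 1)  /* 2^13 - 1: fits a ring with 2^13 slots */
  #define CACHE_SIZE 256

  int main(int argc, char **argv)
  {
          struct rte_mempool *mp;

          if (rte_eal_init(argc, argv) < 0)
                  rte_exit(EXIT_FAILURE, "cannot init EAL\n");

          /* With n = 2^13, the ring would be rounded up to 2^14 slots,
           * i.e. 8192 extra pointers (~64KB), which is small compared
           * to 8191 cache-aligned mbufs of ~2KB each. */
          mp = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUFS, CACHE_SIZE,
                  0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
          if (mp == NULL)
                  rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");

          printf("pool holds %u mbufs\n", rte_mempool_avail_count(mp));
          return 0;
  }

Asking for 8192 instead would still work; it would just allocate a
twice-bigger ring behind the pool.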

So I'll probably move this part from the API guide to the programmer's
guide.
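
In case it helps, here is also a rough sketch (function name and sizes
are made up, error handling is minimal) of selecting the "stack"
handler added in 16.07, where the ring's power-of-two rounding does not
apply:

  #include <rte_lcore.h>
  #include <rte_mempool.h>

  static struct rte_mempool *
  create_stack_pool(const char *name, unsigned int n, unsigned int elt_size)
  {
          struct rte_mempool *mp;

          mp = rte_mempool_create_empty(name, n, elt_size,
                  0 /* cache */, 0 /* priv */, rte_socket_id(), 0);
          if (mp == NULL)
                  return NULL;

          /* Replace the default ring-based ops before populating: the
           * stack handler keeps exactly n pointers, with no reserved
           * slot and no rounding up to a power of two. */
          if (rte_mempool_set_ops_byname(mp, "stack", NULL) < 0 ||
              rte_mempool_populate_default(mp) < 0) {
                  rte_mempool_free(mp);
                  return NULL;
          }

          return mp;
  }

Any other registered handler can be selected the same way, by its ops
name.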

Regards,
Olivier

Thread overview: 5+ messages
2017-02-01 16:54 Lazarenko, Vlad (WorldQuant)
2017-02-06 13:51 ` Olivier Matz
2017-02-06 21:07   ` Pavel Shirshov
2017-02-07  8:53     ` Olivier MATZ [this message]
2017-02-07 15:55       ` Pavel Shirshov
