DPDK usage discussions
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
To: fwefew 4t4tg <7532yahoo@gmail.com>
Cc: users@dpdk.org
Subject: Re: allocating a mempool w/ rte_pktmbuf_pool_create()
Date: Sun, 30 Jan 2022 14:32:47 +0300
Message-ID: <20220130143247.19aaeba8@sovereign>
In-Reply-To: <CA+Tq66VS-ehGHySBpzT3ZpX-7Svi5nFDcCpMPyZzyXq44W=bQQ@mail.gmail.com>

Hi,

2022-01-29 21:33 (UTC-0500), fwefew 4t4tg:
[...]
> > The other crucial insight is: so long as memory is allocated on the same
> > NUMA node as the RXQ/TXQ runs that ultimately uses it, there is only marginal
> > performance advantage to having per-core caching of mbufs in a mempool
> > as provided by the private_data_size formal argument in rte_mempool_create() here:
> >
> > https://doc.dpdk.org/api/rte__mempool_8h.html#a503f2f889043a48ca9995878846db2fd
> >
> > In fact the API doc should really point out the advantage; perhaps it
> > eliminates some cache sloshing to get the last few percent of performance.

Note: "cache sloshing", aka "false sharing", is not the case here.
There is a true, not false, concurrency for the mempool ring
in case multiple lcores use one mempool (see below why you may want this).
A colloquial term is "contention", per-lcore caching reduces it.

Later you talk about the case where a mempool is created for each queue.
The potential issue with this approach is that one queue may quickly deplete
its mempool, say, if it does IPv4 reassembly and holds fragments for a long time.
To counter this, each per-queue mempool must be large, which wastes memory.
This is why one mempool is often created for a set of queues
(at least for the queues processed on lcores of a single NUMA node).
If one queue consumes more mbufs than the others, it is no longer a problem
as long as the mempool as a whole is not depleted.
Per-lcore caching optimizes exactly this case, when many lcores access one mempool,
so it may be less relevant for your case.
You can run the "mempool_perf_autotest" command of the app/test/dpdk-test binary
to see how the cache influences performance.
See also:
https://doc.dpdk.org/guides/prog_guide/mempool_lib.html#mempool-local-cache
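
For illustration, here is a minimal sketch of the shared-pool approach:
one mbuf pool per port, placed on the port's NUMA node and shared by all
of its queues, with the per-lcore cache enabled via the cache_size argument
of rte_pktmbuf_pool_create(). NB_MBUF and CACHE_SIZE below are arbitrary
placeholders; size them for your own traffic.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

#define NB_MBUF    ((1 << 16) - 1) /* a power of two minus one is memory-optimal */
#define CACHE_SIZE 256             /* per-lcore cache, <= RTE_MEMPOOL_CACHE_MAX_SIZE */

static struct rte_mempool *
create_pool_for_port(uint16_t port_id)
{
	char name[RTE_MEMPOOL_NAMESIZE];
	/* Put the pool on the NUMA node of the port so that queues
	 * served by lcores on that node only touch local memory. */
	int socket = rte_eth_dev_socket_id(port_id);

	snprintf(name, sizeof(name), "mbuf_pool_p%u", port_id);
	return rte_pktmbuf_pool_create(name, NB_MBUF, CACHE_SIZE,
				       0 /* priv_size */,
				       RTE_MBUF_DEFAULT_BUF_SIZE, socket);
}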

[...]
> > Let's turn then to a larger issue: what happens if different RXQ/TXQs have
> > radically different needs?
> >
> > As the code above illustrates, one merely allocates a size appropriate to
> > an individual RXQ/TXQ by changing the count and size of mbufs ----
> > which is as simple as it can get.

Correct.
As explained above, it can also be one mempool per queue group.
What do you think is missing here for your use case?
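
P.S. If you do keep one mempool per RX queue, here is a rough sketch of
choosing the mbuf count per queue and handing the pool to
rte_eth_rx_queue_setup(); the counts and names are only placeholders:

#include <stdio.h>
#include <rte_errno.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

static int
setup_rx_queue(uint16_t port_id, uint16_t qid,
	       unsigned int nb_mbufs, uint16_t nb_rxd)
{
	char name[RTE_MEMPOOL_NAMESIZE];
	int socket = rte_eth_dev_socket_id(port_id);
	struct rte_mempool *mp;

	snprintf(name, sizeof(name), "rxq_pool_p%u_q%u", port_id, qid);
	/* Zero cache_size here: with few lcores touching this pool,
	 * ring contention is low and the cache matters less. */
	mp = rte_pktmbuf_pool_create(name, nb_mbufs, 0, 0,
				     RTE_MBUF_DEFAULT_BUF_SIZE, socket);
	if (mp == NULL)
		return -rte_errno;
	return rte_eth_rx_queue_setup(port_id, qid, nb_rxd, socket, NULL, mp);
}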

Thread overview: 5+ messages
2022-01-29 23:46 fwefew 4t4tg
2022-01-30  1:23 ` Dmitry Kozlyuk
2022-01-30  2:29   ` fwefew 4t4tg
2022-01-30  2:33     ` fwefew 4t4tg
2022-01-30 11:32       ` Dmitry Kozlyuk [this message]
