DPDK usage discussions
* Mbuf pool cache size in share-nothing applications
@ 2024-10-04 13:48 Igor Gutorov
From: Igor Gutorov @ 2024-10-04 13:48 UTC (permalink / raw)
  To: users; +Cc: andrew.rybchenko, mb

Hi,

I'm a bit confused about certain semantics of `cache_size` in memory
pools. I'm working on a DPDK application where each Rx queue gets its
own mbuf mempool. The memory pools are never shared between lcores,
mbufs are never passed between lcores, and so an mbuf is always freed
on the same lcore on which it was allocated (it is a run-to-completion
application). Is my understanding correct that this completely
eliminates any lock contention, and that `cache_size` can therefore
safely be set to 0?

Also, `rte_pktmbuf_pool_create()` internally calls
`rte_mempool_create()` with the default `flags`. Would there be a
performance benefit in creating mempools manually with the
`RTE_MEMPOOL_F_SP_PUT` and `RTE_MEMPOOL_F_SC_GET` flags set?
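To make the second question concrete, below is a minimal sketch of what
creating such a pool manually might look like, mirroring what
`rte_pktmbuf_pool_create()` does internally but with a zero cache and the
single-producer/single-consumer flags set. The pool name, mbuf count, and
data room size are illustrative, not taken from my actual application:

```c
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Illustrative parameters, not from the real application. */
#define NB_MBUFS 8191  /* mbufs per Rx queue pool */

struct rte_mempool *mp = rte_mempool_create(
        "rx_pool_q0",                    /* hypothetical per-queue name */
        NB_MBUFS,
        /* element size: mbuf header + private area + data room */
        sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
        0,                               /* cache_size = 0: no per-lcore cache */
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL,     /* pool-level mbuf init */
        rte_pktmbuf_init, NULL,          /* per-object mbuf init */
        rte_socket_id(),
        RTE_MEMPOOL_F_SP_PUT | RTE_MEMPOOL_F_SC_GET);
if (mp == NULL)
        rte_exit(EXIT_FAILURE, "cannot create mbuf pool\n");
```

Is this the right way to get single-producer/single-consumer ring behavior
for a per-queue pool, and would it actually be measurably faster than the
default multi-producer/multi-consumer path given that only one lcore ever
touches the pool?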

Thanks!

Sincerely,
Igor.
