* Mbuf pool cache size in share-nothing applications
From: Igor Gutorov @ 2024-10-04 13:48 UTC (permalink / raw)
To: users; +Cc: andrew.rybchenko, mb
Hi,
I'm a bit confused about certain semantics of `cache_size` in memory
pools. I'm working on a DPDK application where each Rx queue gets its
own mbuf mempool. The memory pools are never shared between lcores,
mbufs are never passed between lcores, and so the deallocation of an
mbuf will happen on the same lcore where it was allocated (it is a
run-to-completion application). Is my understanding correct that this
completely eliminates any lock contention, and so `cache_size` can
safely be set to 0?
Also, `rte_pktmbuf_pool_create()` internally calls
`rte_mempool_create()` with the default `flags`. Would there be a
performance benefit in creating mempools manually with the
`RTE_MEMPOOL_F_SP_PUT` and `RTE_MEMPOOL_F_SC_GET` flags set?
Thanks!
Sincerely,
Igor.
* RE: Mbuf pool cache size in share-nothing applications
From: Morten Brørup @ 2024-10-05 17:27 UTC (permalink / raw)
To: Igor Gutorov, users; +Cc: andrew.rybchenko
> From: Igor Gutorov [mailto:igootorov@gmail.com]
> Sent: Friday, 4 October 2024 15.49
>
> Hi,
>
> I'm a bit confused about certain semantics of `cache_size` in memory
> pools. I'm working on a DPDK application where each Rx queue gets its
> own mbuf mempool. The memory pools are never shared between lcores,
> mbufs are never passed between lcores, and so the deallocation of an
> mbuf will happen on the same lcore where it was allocated (it is a
> run-to-completion application). Is my understanding correct that this
> completely eliminates any lock contention, and so `cache_size` can
> safely be set to 0?
Correct.
However, accessing objects in the cache is faster than accessing objects in the backing pool, because the cache is accessed through optimized inline functions.
>
> Also, `rte_pktmbuf_pool_create()` internally calls
> `rte_mempool_create()` with the default `flags`. Would there be a
> performance benefit in creating mempools manually with the
> `RTE_MEMPOOL_F_SP_PUT` and `RTE_MEMPOOL_F_SC_GET` flags set?
If you want higher performance, create the mbuf pools with a large cache. Then your application will rarely access the mempool's backend, so its flags have less significance.
-Morten