* [dpdk-users] Has anybody tested rte_mbuf_free performance?
@ 2016-08-22 12:37 forsakening
2016-08-22 17:40 ` Andriy Berestovskyy
0 siblings, 1 reply; 2+ messages in thread
From: forsakening @ 2016-08-22 12:37 UTC (permalink / raw)
To: users
Hi Everyone:
I found that rte_pktmbuf_free does not perform well when called from more than 12 parallel callers.
Has anyone met the same problem? Thanks for any direction.
forsakening@sina.cn
* Re: [dpdk-users] Has anybody tested rte_mbuf_free performance?
2016-08-22 12:37 [dpdk-users] Has anybody tested rte_mbuf_free performance? forsakening
@ 2016-08-22 17:40 ` Andriy Berestovskyy
0 siblings, 0 replies; 2+ messages in thread
From: Andriy Berestovskyy @ 2016-08-22 17:40 UTC (permalink / raw)
To: forsakening; +Cc: users
Hi,
Here are a few suggestions for you:
1. Try to increase the mempool cache size.
2. Try to split the load across a few mempools.
3. Try to free in bulk, as many PMDs do (see the sketch below).
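
For item 3, a minimal sketch of what PMD-style bulk freeing can look like, assuming single-segment mbufs that all come from the same mempool (in older DPDK releases rte_pktmbuf_prefree_seg() was named __rte_pktmbuf_prefree_seg(), and newer releases also provide rte_pktmbuf_free_bulk() directly):

  #include <rte_common.h>
  #include <rte_mbuf.h>
  #include <rte_mempool.h>

  /* Return a burst of mbufs to their mempool with one bulk call instead of
   * one rte_pktmbuf_free() per mbuf. Assumes single-segment mbufs from a
   * single mempool; real PMDs track m->pool and flush per pool. */
  static inline void
  free_mbufs_bulk(struct rte_mbuf **mbufs, unsigned int n)
  {
          void *to_free[64];
          struct rte_mempool *pool = NULL;
          unsigned int count = 0, i;

          for (i = 0; i < n; i++) {
                  /* prefree returns the mbuf once its refcnt drops to zero
                   * (resetting next/nb_segs), or NULL if it is still in use */
                  struct rte_mbuf *m = rte_pktmbuf_prefree_seg(mbufs[i]);
                  if (m == NULL)
                          continue;
                  pool = m->pool;
                  to_free[count++] = m;
                  if (count == RTE_DIM(to_free)) {
                          rte_mempool_put_bulk(pool, to_free, count);
                          count = 0;
                  }
          }
          if (count > 0)
                  rte_mempool_put_bulk(pool, to_free, count);
  }

For item 1, the per-lcore cache is the cache_size argument passed to rte_pktmbuf_pool_create(); a larger cache keeps frees local to the calling lcore and reduces contention on the shared pool ring.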
Regards,
Andriy
On Mon, Aug 22, 2016 at 2:37 PM, forsakening@sina.cn
<forsakening@sina.cn> wrote:
> Hi Everyone:
> I found that rte_pktmbuf_free does not perform well when called from more than 12 parallel callers.
> Has anyone met the same problem? Thanks for any direction.
>
>
>
> forsakening@sina.cn
--
Andriy Berestovskyy