DPDK usage discussions
* Re: [dpdk-users] Bigger mempool leads to worse performance.
@ 2020-06-18 10:26 Benoit Ganne (bganne)
From: Benoit Ganne (bganne) @ 2020-06-18 10:26 UTC (permalink / raw)
  To: users; +Cc: contact

> It looks like in our environment increasing the mempool reduces the
> capture performance. Any suggestion on what I might look at to
> troubleshoot the problem? Apparently we can't go beyond a 4GiB mempool
> without a performance penalty.
> (Please note that 1GiB hugepages are configured to serve all the
> required additional memory.)

I'd bet on page walks caused by TLB misses: if I am not mistaken, on Intel you only have 4 TLB entries for 1GB hugepages, and prior to Skylake there is no victim cache. So, as soon as you use more than 4 x 1GB hugepages, you will start triggering page walks because your pages no longer all fit in the TLB.
You should be able to check that with 'perf stat -e dTLB-loads,dTLB-load-misses,dTLB-stores,dTLB-store-misses -a -I1000' or similar.
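If that is indeed the cause, keeping the pool within the TLB-covered budget should help. Below is a minimal sketch of sizing a pktmbuf pool to stay under 4 GiB; the pool name, the 4 GiB budget and the 128-byte element-overhead slack are illustrative assumptions, not exact DPDK figures:

#include <stdio.h>
#include <stdlib.h>
#include <rte_debug.h>
#include <rte_eal.h>
#include <rte_errno.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

int main(int argc, char **argv)
{
	/* Standard EAL init; hugepage options come from the command line. */
	if (rte_eal_init(argc, argv) < 0)
		rte_exit(EXIT_FAILURE, "EAL init failed\n");

	/* Rough per-mbuf footprint: mbuf header + default data room, plus
	 * some slack for mempool element overhead (estimate only). */
	const size_t budget   = 4ULL << 30;   /* 4 GiB: fits in 4 TLB entries */
	const size_t per_mbuf = sizeof(struct rte_mbuf)
			      + RTE_MBUF_DEFAULT_BUF_SIZE + 128;
	unsigned int n_mbufs  = budget / per_mbuf;

	struct rte_mempool *mp = rte_pktmbuf_pool_create("capture_pool",
			n_mbufs, 512 /* per-lcore cache */, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	if (mp == NULL)
		rte_exit(EXIT_FAILURE, "mempool create failed: %s\n",
			 rte_strerror(rte_errno));

	printf("created %u mbufs (~%zu MiB)\n",
	       n_mbufs, (n_mbufs * per_mbuf) >> 20);
	return 0;
}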

Best
ben


* [dpdk-users] Bigger mempool leads to worse performance.
@ 2020-06-17 14:02 Filip Janiszewski
From: Filip Janiszewski @ 2020-06-17 14:02 UTC (permalink / raw)
  To: users

Hi All,

I'm very aware the question is generic, but we can't really understand
what the problem might be here.

In short, we have capture software running smoothly at 20GbE and
capturing everything. Recently we switched gear and increased the
amount of data, and started seeing some drops.

One of the first ideas was that increasing the mempool for the port
would bring some performance benefit, or in the worst case no change
in the drop rate; but, unexpectedly, we started dropping
*substantially more* packets.

It looks like in our environment increasing the mempool reduces the
capture performance. Any suggestion on what I might look at to
troubleshoot the problem? Apparently we can't go beyond a 4GiB mempool
without a performance penalty.

(Please note that 1GiB hugepages are configured to serve all the
required additional memory.)
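
To narrow down where the drops actually happen, a check along these lines can be run while capturing (a minimal sketch; port 0 and the function name are assumptions): rx_nombuf grows only when the mempool itself runs dry, while imissed grows when the NIC RX queues overflow.

#include <inttypes.h>
#include <stdio.h>
#include <rte_ethdev.h>

/* Print the counters that separate NIC-queue drops (imissed) from
 * mbuf exhaustion (rx_nombuf). Assumes the port is configured and
 * started elsewhere in the capture application. */
static void report_drops(uint16_t port_id)
{
	struct rte_eth_stats st;

	if (rte_eth_stats_get(port_id, &st) != 0)
		return;

	printf("port %u: ipackets=%" PRIu64 " imissed=%" PRIu64
	       " rx_nombuf=%" PRIu64 "\n",
	       port_id, st.ipackets, st.imissed, st.rx_nombuf);
}

Calling this once a second from the main loop while drops occur should show which counter is climbing.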

Thanks

-- 
BR, Filip
+48 666 369 823

