DPDK usage discussions
* Re: [dpdk-users] Bigger mempool leads to worse performance.
@ 2020-06-18 10:26 Benoit Ganne (bganne)
  0 siblings, 0 replies; 2+ messages in thread
From: Benoit Ganne (bganne) @ 2020-06-18 10:26 UTC (permalink / raw)
  To: users; +Cc: contact

> It looks like in our environment increasing the mempool reduces the
> capture performance; any suggestion on what I might look at to
> troubleshoot the problem? Apparently we can't go beyond a 4 GiB
> mempool without a performance penalty.
> (Please note that 1 GiB hugepages are configured to serve all the
> required additional memory.)

I'd bet on page walks caused by TLB misses: if I am not mistaken, on Intel you only have 4 TLB entries for 1 GB hugepages, and prior to Skylake you have no victim cache. So as soon as you use more than 4 x 1 GB hugepages, you will start triggering page walks because your pages no longer all fit in the TLB.
You should be able to check that with 'perf stat -e dTLB-loads,dTLB-load-misses,dTLB-stores,dTLB-store-misses -a -I1000' or similar.

Best
ben


* [dpdk-users] Bigger mempool leads to worse performance.
@ 2020-06-17 14:02 Filip Janiszewski
  0 siblings, 0 replies; 2+ messages in thread
From: Filip Janiszewski @ 2020-06-17 14:02 UTC (permalink / raw)
  To: users

Hi All,

I'm well aware the question is generic, but we can't really work out
what the problem would be here.

In short, we have capture software running smoothly at 20 GbE and
capturing everything. Recently we switched gear and increased the
amount of data, and encountered some drops.

One of the first ideas was that increasing the mempool for the port
would bring some performance benefit, or in the worst case leave the
drop rate unchanged, but unexpectedly we started dropping
*substantially more* packets.

It looks like in our environment increasing the mempool reduces the
capture performance; any suggestion on what I might look at to
troubleshoot the problem? Apparently we can't go beyond a 4 GiB mempool
without a performance penalty.

(Please note that 1 GiB hugepages are configured to serve all the
required additional memory.)

Thanks

-- 
BR, Filip
+48 666 369 823
