* [dpdk-users] Small constant frame loss only for more than 5 seconds of traffic at 10Gbps
@ 2017-09-12 15:40 DESBRUS Maxime
0 siblings, 0 replies; only message in thread
From: DESBRUS Maxime @ 2017-09-12 15:40 UTC (permalink / raw)
To: users
Hello
I am trying to capture 60-byte Ethernet frames at link speed on two 10 GbE ports, using DPDK 17.08 on Linux 4.4 with an Intel XL710 card.
I use a slightly modified version of the l2fwd example program that only counts incoming frames and immediately frees the mbufs.
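For reference, the core of my receive loop looks roughly like this (a minimal sketch, not the exact code; the port/queue setup is assumed to follow the stock l2fwd example, and the burst size is one of the parameters I varied):

```c
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32  /* burst length, one of the knobs I tuned */

/* Per-port counter of received frames. */
static uint64_t rx_count[RTE_MAX_ETHPORTS];

/* Poll one RX queue, count the frames, and free the mbufs immediately. */
static inline void
rx_drain(uint16_t port_id)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint16_t i, nb_rx;

	nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
	rx_count[port_id] += nb_rx;
	for (i = 0; i < nb_rx; i++)
		rte_pktmbuf_free(bufs[i]);
}
```

Each lcore just calls this in a tight loop for its assigned port; there is no forwarding, parsing, or copying.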
The behavior is strange: I can capture without any loss for 5 seconds of traffic, but beyond that I always lose a few frames (about 15,000). For 10, 30 or 60 seconds of traffic the number of lost frames barely changes (it is roughly constant, not a percentage of the incoming traffic).
I did apply all the recommendations to eliminate perturbations on the working cores:
* isolate cores with isolcpus, nohz_full, rcu_nocbs boot parameters
* disable hyper threading
* disable NMI watchdog
* disable pstate frequency scaling
* disable pause frames on both sides
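Concretely, the kernel command line looks something like this (the core list 2-5 is only illustrative; substitute the cores actually dedicated to the DPDK lcores):

```shell
# Example /etc/default/grub entry -- core list 2-5 is illustrative,
# adjust to the cores given to the DPDK worker threads.
GRUB_CMDLINE_LINUX_DEFAULT="isolcpus=2-5 nohz_full=2-5 rcu_nocbs=2-5 \
    nmi_watchdog=0 intel_pstate=disable"
```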
I also tried to tweak a few things:
* increase/decrease mbuf pool size
* use a dedicated rte_mempool object per port
* increase/decrease packet burst length
* increase/decrease rx descriptor count
Although some of these did reduce the loss a bit, I still get a few thousand lost frames whenever the traffic lasts more than 5 seconds.
I don't understand why performance seems to change only after a few seconds of traffic.
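For the dedicated-mempool-per-port variant, the pool creation is roughly this (a sketch; the pool size and per-core cache size are the values I increased/decreased, and the names are illustrative):

```c
#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* One mbuf pool per port instead of a shared pool; nb_mbufs and the
 * per-core cache size are the knobs I varied while testing. */
static struct rte_mempool *
create_port_pool(uint16_t port_id, unsigned int nb_mbufs, int socket_id)
{
	char name[32];

	snprintf(name, sizeof(name), "mbuf_pool_p%u", port_id);
	return rte_pktmbuf_pool_create(name, nb_mbufs,
			256 /* per-core cache */, 0 /* priv size */,
			RTE_MBUF_DEFAULT_BUF_SIZE, socket_id);
}
```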
Any ideas?
Thanks
--
Maxime DESBRUS