DPDK usage discussions
* [dpdk-users] eventdev performance
@ 2018-08-05 19:03 Anthony Hart
  2018-08-07  8:34 ` Van Haaren, Harry
  0 siblings, 1 reply; 5+ messages in thread
From: Anthony Hart @ 2018-08-05 19:03 UTC (permalink / raw)
  To: users

I’ve been doing some performance measurements with the eventdev_pipeline example application (to see how the eventdev library performs - DPDK 18.05), and I’m looking for some help in determining where the bottlenecks are in my testing.

I have 1 Rx, 1 Tx, 1 scheduler and N worker cores (one sw_event0 device).  In this configuration performance tops out at about 12 Mpps with 3 workers (6 cores total), and adding more workers actually reduces throughput.  The same setup running testpmd reaches >25 Mpps using only 1 core.


This is the eventdev_pipeline command line:
eventdev_pipeline -l 0,1-6 -w0000:02:00.0 --vdev event_sw0 -- -r2 -t4 -e8 -w70 -s1 -n0 -c128 -W0 -D
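
To spell out the coremasks (my reading of the sample's usage text, so please correct me if I have these wrong):

  -r2  = 0x02 -> core 1     (Rx)
  -t4  = 0x04 -> core 2     (Tx)
  -e8  = 0x08 -> core 3     (scheduler)
  -w70 = 0x70 -> cores 4-6  (workers)
  -s1 -n0 -c128 -W0 -D      (1 stage, unlimited packets, CQ depth 128,
                             no extra work per packet, dump stats at exit)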

This is the testpmd command line:
testpmd -w0000:02:00.0 -l 0,1 -- -i --nb-core 1 --numa --rxq 1 --txq 1 --port-topology=loop


I’m guessing that it’s either the Rx core or the scheduler core that’s the bottleneck in my eventdev_pipeline setup.

So I first tried using 2 cores for Rx (-r6), but performance went down.  It seems that configuring 2 Rx cores still only sets up 1 hardware receive ring, and access to that one ring alternates between the two cores?  So that doesn’t help.
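
For comparison, this is roughly what I believe giving each Rx core its own hardware ring would require at the ethdev level: multiple Rx queues with RSS spreading flows across them. This is only a sketch against the generic ethdev API, not something the eventdev_pipeline sample exposes as far as I can tell, and the function name is just illustrative:

#include <rte_ethdev.h>

/* Sketch only: configure a port with 2 Rx queues + RSS so that two Rx
 * cores can each poll their own hardware ring instead of sharing one.
 * (Tx queue setup and most error handling omitted.) */
static int
setup_two_rx_queues(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = { .rss_hf = ETH_RSS_IP | ETH_RSS_UDP | ETH_RSS_TCP },
		},
	};
	const uint16_t nb_rxq = 2, nb_txq = 1, nb_desc = 512;
	int ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
	if (ret < 0)
		return ret;

	for (uint16_t q = 0; q < nb_rxq; q++) {
		/* one hardware ring per Rx core, no contention between cores */
		ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
				rte_eth_dev_socket_id(port_id), NULL, mb_pool);
		if (ret < 0)
			return ret;
	}
	return 0;
}

(With testpmd the equivalent would be --rxq 2 --txq 2 plus a second forwarding core.)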

Next, I tried 2 scheduler cores, but how does that work?  Do they again alternate?  In any case, throughput dropped by 50% in that test.
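
My understanding (which may be wrong, hence the question) is that event_sw exposes its scheduling logic as a DPDK service that is not multi-thread safe, so each scheduler core ends up running something like the loop below; with two cores the serialize flag would make them take turns rather than schedule in parallel. The names here are illustrative, not taken from the sample:

#include <stdbool.h>
#include <rte_eventdev.h>
#include <rte_service.h>

/* Sketch of a scheduler-core loop driving the sw eventdev's service. */
static void
scheduler_loop(uint8_t evdev_id, volatile bool *done)
{
	uint32_t service_id;

	/* Look up the service registered by the event_sw PMD. */
	if (rte_event_dev_service_id_get(evdev_id, &service_id) != 0)
		return;

	while (!*done) {
		/* serialize_mt_unsafe = 1: if another lcore is already inside
		 * this service, this call waits its turn instead of running
		 * concurrently, so a second scheduler core adds no capacity. */
		rte_service_run_iter_on_app_lcore(service_id, 1);
	}
}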



thanks for any insights,
tony


Thread overview: 5+ messages
2018-08-05 19:03 [dpdk-users] eventdev performance Anthony Hart
2018-08-07  8:34 ` Van Haaren, Harry
2018-08-09 15:56   ` Anthony Hart
2018-08-15 16:04     ` Van Haaren, Harry
2018-08-20 16:05       ` Anthony Hart
