DPDK patches and discussions
* [dpdk-dev] l2fwd/l3fwd performance drop of about 25% ?
@ 2014-02-25 21:01 Jun Han
  2014-02-27  9:36 ` Richardson, Bruce
  0 siblings, 1 reply; 2+ messages in thread
From: Jun Han @ 2014-02-25 21:01 UTC (permalink / raw)
  To: dev

Hi all,

I have a quick question regarding the performance of DPDK l2fwd (I see the same
problem with l3fwd). When we start multiple ports (e.g., 12 ports), for 64-byte
packets the RX rate is only around 11 Mpps per port instead of 14.88 Mpps, which
is line rate once the preamble, start-of-frame delimiter, and interframe gap are
accounted for. Do you know what could be the problem? My experiment setup is
described below.

Setup:
1. We have a dual-socket Intel Xeon E5-2680 machine (8 cores per socket, 2.7 GHz)
with six dual-port Intel 82599EB 10 GbE NICs (12 ports in total). Machine A runs
pktgen, and Machine B runs DPDK l2fwd, unmodified.

2. We are running pktgen on Machine A with the following command:
./app/build/pktgen -c ffff -n 4 --proc-type auto --socket-mem 1024,1024
--file-prefix pg -- -p 0xfff0 -P -m "1.0, 2.1, 3.2, 4.3, 8.4, 9.5, 10.6,
11.7, 12.8, 13.9, 14.10, 15.11"

3. We are running l2fwd on Machine B with the following command:
sudo ./build/l2fwd -c 0xff0f -n 4 -- -p 0xfff
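
For reference, a quick way to sanity-check those masks (a small stand-alone
Python sketch, not part of l2fwd; it just expands the hex bitmasks into the
lcore and port IDs they enable):

    # decode_masks.py - expand DPDK-style hex bitmasks into the enabled IDs
    def decode_mask(mask):
        return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

    print("lcores:", decode_mask(0xff0f))  # -c 0xff0f -> lcores 0-3 and 8-15 (12 lcores)
    print("ports: ", decode_mask(0xfff))   # -p 0xfff  -> ports 0-11 (12 ports)

That is 12 enabled lcores for the 12 ports.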


Thank you very much in advance.
Jun


* Re: [dpdk-dev] l2fwd/l3fwd performance drop of about 25% ?
  2014-02-25 21:01 [dpdk-dev] l2fwd/l3fwd performance drop of about 25% ? Jun Han
@ 2014-02-27  9:36 ` Richardson, Bruce
  0 siblings, 0 replies; 2+ messages in thread
From: Richardson, Bruce @ 2014-02-27  9:36 UTC (permalink / raw)
  To: Jun Han, dev

> Hi all,
> 
> I have a quick question regarding the performance of DPDK l2fwd (I see the same
> problem with l3fwd). When we start multiple ports (e.g., 12 ports), for 64-byte
> packets the RX rate is only around 11 Mpps per port instead of 14.88 Mpps, which
> is line rate once the preamble, start-of-frame delimiter, and interframe gap are
> accounted for. Do you know what could be the problem? My experiment setup is
> described below.

When using both ports on a dual-port 10G NIC, you cannot hit line rate on both ports with 64-byte packets because there is not enough PCIe bandwidth (the 82599EB sits in a PCIe Gen2 x8 slot). If you try with two ports on different NICs you should see better performance, since the PCIe slot can handle the small-packet traffic of a single port at 10G. With larger packet sizes, e.g. 128-byte packets, the issue also goes away and the NICs can manage full line rate on both ports.
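
A rough back-of-envelope check illustrates this. The per-TLP and descriptor
overheads below are assumed round numbers for illustration, not measured
82599 figures:

    # pcie_budget.py - does 64B line rate on both ports fit in a Gen2 x8 slot?
    LANES, GT_PER_LANE = 8, 5e9                      # PCIe Gen2 x8
    usable_Bps = LANES * GT_PER_LANE * 8 / 10 / 8    # 8b/10b coding -> ~4.0 GB/s per direction

    def sustainable_mpps(pkt_bytes, tlp_hdr=24, desc=16, tlps_per_pkt=3):
        # Assumed cost per forwarded packet in one direction: packet DMA, one
        # descriptor, and a few TLP headers for the DMA/descriptor transactions.
        per_pkt = pkt_bytes + desc + tlps_per_pkt * tlp_hdr
        return usable_Bps / per_pkt / 1e6

    for size in (64, 128):
        need = 2 * 10e9 / ((size + 20) * 8) / 1e6    # both ports at line rate, in Mpps
        print(f"{size}B: need {need:.1f} Mpps, slot sustains roughly {sustainable_mpps(size):.1f} Mpps")

With these assumptions a dual-port NIC needs ~29.8 Mpps at 64 bytes while the
slot sustains only around 26 Mpps (less once flow-control DLLPs and the actual
descriptor fetch pattern are counted), which is in the same ballpark as the
2 x 11 Mpps being observed. At 128 bytes the requirement drops to ~16.9 Mpps
and fits within the budget.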

