DPDK patches and discussions
From: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
To: Pavel Odintsov <pavel.odintsov@gmail.com>,
	Paul Emmerich <emmericp@net.in.tum.de>
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] Performance regression in DPDK 1.8/2.0
Date: Mon, 27 Apr 2015 17:38:38 +0000
Message-ID: <E115CCD9D858EF4F90C690B0DCB4D89727297ADB@IRSMSX108.ger.corp.intel.com>
In-Reply-To: <CALgsdbciDyJLm8Rp9GNbDb6sN=eCcFEVkWsSw95fvG32mYfqBg@mail.gmail.com>

Hi,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Pavel Odintsov
> Sent: Monday, April 27, 2015 9:07 AM
> To: Paul Emmerich
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] Performance regression in DPDK 1.8/2.0
> 
> Hello!
> 
> I ran thorough tests with Paul's toolkit and can confirm the performance
> degradation in 2.0.0.
> 
> On Sun, Apr 26, 2015 at 9:50 PM, Paul Emmerich <emmericp@net.in.tum.de>
> wrote:
> > Hi,
> >
> > I'm working on a DPDK-based packet generator [1] and I recently tried to
> > upgrade from DPDK 1.7.1 to 2.0.0.
> > However, I noticed that DPDK 1.7.1 is about 25% faster than 2.0.0 for
> > my use case.
> >
> > So I ran some basic performance tests on the l2fwd example with DPDK
> > 1.7.1, 1.8.0, and 2.0.0.
> > I used an Intel Xeon E5-2620 v3 CPU clocked down to 1.2 GHz in order to
> > ensure that the CPU and not the network bandwidth is the bottleneck.
> > I configured l2fwd to forward between two interfaces of an X540 NIC using
> > only a single CPU core (-q2) and measured the following throughput under
> > full bidirectional load:
> >
> >
> > Version  TP [Mpps]  Cycles/Pkt
> > 1.7.1    18.84       84.93
> > 1.8.0    16.78       95.35
> > 2.0.0    16.40       97.56
> >
> > DPDK 1.7.1 is about 15% faster in this scenario. The obvious suspect is the
> > new mbuf structure introduced in DPDK 1.8, so I profiled L1 cache misses:
> >
> > Version   L1 miss ratio
> > 1.7.1     6.5%
> > 1.8.0    13.8%
> > 2.0.0    13.4%
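
On the mbuf suspicion: the unified mbuf layout introduced in 1.8 spans two
64-byte cache lines, whereas the 1.7.x mbuf was designed to fit in one, so
some additional L1 pressure is expected. A quick sanity check on your build
(just a sketch; it assumes the DPDK headers are on the include path):

  #include <stdio.h>
  #include <rte_mbuf.h>

  int main(void)
  {
      /* expect 64 bytes on 1.7.x, 128 bytes (two cache lines) from 1.8 on */
      printf("sizeof(struct rte_mbuf) = %zu\n", sizeof(struct rte_mbuf));
      return 0;
  }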
> >
> >
> > FWIW, the performance results with my packet generator on the same
> > 1.2 GHz CPU core are:
> >
> > Version  TP [Mpps]  L1 cache miss ratio
> > 1.7      11.77      4.3%
> > 2.0      9.5        8.4%

Could you tell me how you measured the L1 cache miss ratio? Was it with perf?
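If so, something along these lines is what I would use to cross-check
(these are the generic perf cache events; exact availability depends on the
kernel and CPU):

  perf stat -e L1-dcache-loads,L1-dcache-load-misses -C <core> -- sleep 10

where <core> is the lcore running the forwarding loop, and the miss ratio
is L1-dcache-load-misses / L1-dcache-loads.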
> >
> >
> > The discussion about the original patch [2], which introduced the new
> > mbuf structure, addresses this potential performance degradation and
> > mentions that it is somehow mitigated.
> > It even claims a 20% *increase* in performance in a specific scenario.
> > However, that doesn't seem to be the case for both l2fwd and my packet
> > generator.
> >
> > Any ideas on how to fix this? A 25% loss in throughput prevents me from
> > upgrading to DPDK 2.0.0. I need the new lcore features and the 40 Gbit
> > driver updates, so I can't stay on 1.7.1 forever.

Could you provide more information on how you ran the l2fwd app, so that we
can try to reproduce the issue:
- the l2fwd command line (see the sketch below)
- the l2fwd initialization output (to check the memory/CPU/NIC setup)
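
For reference, a typical l2fwd invocation looks something like this (the
core mask, memory channel count, and port mask below are placeholders, not
a guess at your actual setup):

  ./build/l2fwd -c 0x3 -n 4 -- -p 0x3 -q 2

EAL options go before the '--'; the application options (-p port mask,
-q rx queues per lcore) come after it.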

Did you modify the l2fwd app itself between the versions? l2fwd uses the
simple rx path on 1.7.1, whereas it uses the vector rx path on 2.0 (enable
IXGBE_DEBUG_INIT to check which one is selected; see the note below).
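
In case it is useful: that debug output is enabled at build time, e.g.
(this assumes the common Linux target config; the file name may vary
between versions):

  # in config/common_linuxapp, then rebuild:
  CONFIG_RTE_LIBRTE_IXGBE_DEBUG_INIT=y

With that set, the ixgbe PMD logs at initialization which rx/tx functions
it has selected.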

One last question: I assume you used your own traffic generator to obtain
all these numbers. Which packet format/size did you use, and does your
generator take the inter-packet gap into account?
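
(For reference, since the gap matters for the math: every Ethernet frame
carries 20 extra bytes on the wire: a 7-byte preamble, a 1-byte
start-of-frame delimiter, and a 12-byte inter-frame gap. At 10 GbE a
64-byte frame therefore occupies (64 + 20) * 8 = 672 bit-times, which caps
a single port at 10,000,000,000 / 672 ≈ 14.88 Mpps.)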

Thanks!

Pablo
> >
> > Paul
> >
> >
> > [1] https://github.com/emmericp/MoonGen
> > [2] http://comments.gmane.org/gmane.comp.networking.dpdk.devel/5155
> 
> 
> 
> --
> Sincerely yours, Pavel Odintsov
