DPDK patches and discussions
From: "Hanoch Haim (hhaim)" <hhaim@cisco.com>
To: "Mcnamara, John" <john.mcnamara@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>
Cc: "Ido Barnea (ibarnea)" <ibarnea@cisco.com>,
	"Hanoch Haim (hhaim)" <hhaim@cisco.com>
Subject: Re: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%
Date: Tue, 14 Feb 2017 12:31:56 +0000	[thread overview]
Message-ID: <6ee7449acb434fafb80c5cb1b970be15@XCH-RTP-017.cisco.com> (raw)
In-Reply-To: <B27915DBBA3421428155699D51E4CFE2026CBEAB@IRSMSX103.ger.corp.intel.com>

Hi John, thank you for the fast response.
I assume the Intel tests are mostly rx->tx (forwarding) tests.
In our case we are doing mostly tx, which is more similar to dpdk-pkt-gen.
The cases where we cache the mbufs were affected the most.
We expect to see the same issue with a simple DPDK application.
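
To illustrate what we mean by cached mbufs, here is a minimal sketch of the
kind of tx-heavy loop we run (the names, burst size and structure are
illustrative only, not our actual code): the packets are built once and
retransmitted by bumping the mbuf reference count before each send, so the
PMD's free on tx completion only drops the count back instead of releasing
the buffer.

    #include <rte_mbuf.h>
    #include <rte_ethdev.h>

    #define BURST 32

    /* Illustrative sketch only: tx_pkts[] holds packets built once at
     * startup and reused ("cached") across sends instead of being
     * allocated per packet. */
    static void tx_loop(uint16_t port_id, uint16_t queue_id,
                        struct rte_mbuf *tx_pkts[BURST])
    {
        uint16_t i, nb;

        for (;;) {
            /* Keep ownership: the PMD only decrements the refcount on
             * tx completion, so the same packet can be sent again. */
            for (i = 0; i < BURST; i++)
                rte_mbuf_refcnt_update(tx_pkts[i], 1);

            nb = rte_eth_tx_burst(port_id, queue_id, tx_pkts, BURST);

            /* Drop the extra reference for packets not enqueued. */
            for (i = nb; i < BURST; i++)
                rte_mbuf_refcnt_update(tx_pkts[i], -1);
        }
    }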

Thanks,
Hanoh

-----Original Message-----
From: Mcnamara, John [mailto:john.mcnamara@intel.com] 
Sent: Tuesday, February 14, 2017 2:19 PM
To: Hanoch Haim (hhaim); dev@dpdk.org
Cc: Ido Barnea (ibarnea)
Subject: RE: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Hanoch Haim 
> (hhaim)
> Sent: Tuesday, February 14, 2017 11:45 AM
> To: dev@dpdk.org
> Cc: Ido Barnea (ibarnea) <ibarnea@cisco.com>; Hanoch Haim (hhaim) 
> <hhaim@cisco.com>
> Subject: [dpdk-dev] DPDK 17.02 RC-3 performance degradation of ~10%
> 
> Hi,
> 
> We (the TRex traffic generator project) upgraded DPDK from 16.07 to
> 17.02-rc3 and experienced a performance degradation on the following
> NICs:
> 
> XL710 : 10-15%
> ixgbe : 8% in one case
> mlx5  : 8% in two cases
> X710  : no impact (same driver as XL710)
> VIC   : no impact
> 
> It might be related to a DPDK infrastructure change, as it affects more
> than one driver (maybe mbuf?).
> We wanted to know if this is expected before investing more into this.
> The Y axis in all the following charts (from Grafana) is MPPS/core,
> which is our measure of how many CPU cycles are invested per packet.
> A higher MPPS/core means fewer CPU cycles to send a packet.
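
(A rough reference for the MPPS/core numbers quoted above: cycles per packet
is approximately core_clock_hz / (MPPS * 1e6). The small sketch below uses an
assumed 2.9 GHz core clock and example rates purely for illustration; neither
figure comes from the report.)

    #include <stdio.h>

    int main(void)
    {
        /* Assumed core clock and example throughputs, illustration only. */
        const double core_hz = 2.9e9;
        const double mpps[] = { 15.0, 13.5 }; /* e.g. before/after ~10% drop */

        for (int i = 0; i < 2; i++)
            printf("%.1f MPPS/core -> %.0f cycles/packet\n",
                   mpps[i], core_hz / (mpps[i] * 1e6));
        return 0;
    }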

Hi,

Thanks for the update. From a quick check with the Intel test team, they haven't seen this; they would have flagged it if they had. Perhaps someone from Mellanox/Netronome could confirm as well.

Could you do a git-bisect to identify the change that caused this?

John


Thread overview: 9+ messages
2017-02-14 11:44 Hanoch Haim (hhaim)
2017-02-14 12:19 ` Mcnamara, John
2017-02-14 12:31   ` Hanoch Haim (hhaim) [this message]
2017-02-14 13:04   ` Thomas Monjalon
2017-02-14 13:28     ` De Lara Guarch, Pablo
2017-02-14 13:38       ` De Lara Guarch, Pablo
2017-02-15 12:35     ` Hanoch Haim (hhaim)
2017-06-21 21:24 Gudimetla, Leela Sankar
2017-06-21 22:06 Gudimetla, Leela Sankar
