From: "De Lara Guarch, Pablo"
To: Paul Emmerich
Cc: "dev@dpdk.org"
Date: Tue, 28 Apr 2015 11:31:07 +0000
Subject: Re: [dpdk-dev] Performance regression in DPDK 1.8/2.0

> -----Original Message-----
> From: Paul Emmerich [mailto:emmericp@net.in.tum.de]
> Sent: Monday, April 27, 2015 11:29 PM
> To: De Lara Guarch, Pablo
> Cc: Pavel Odintsov; dev@dpdk.org
> Subject: Re: [dpdk-dev] Performance regression in DPDK 1.8/2.0
>
> Hi,
>
> Pablo:
> > Could you tell me how you got the L1 cache miss ratio? Perf?
>
> perf stat -e L1-dcache-loads,L1-dcache-misses l2fwd ...
>
> > Could you provide more information on how you run the l2fwd app,
> > in order to try to reproduce the issue:
> > - L2fwd command line
>
> ./build/l2fwd -c 3 -n 2 -- -p 3 -q 2
>
> > - L2fwd initialization (to check memory/CPU/NICs)
>
> Unfortunately I did not save the output, but I wrote down the important
> parts:
>
> 1.7.1: no output regarding rx/tx code paths, as init debug wasn't enabled
> 1.8.0 and 2.0.0: simple tx code path, vector rx
>
> Hardware:
>
> CPU: Intel(R) Xeon(R) CPU E3-1230 v2,
> TurboBoost and HyperThreading disabled,
> frequency fixed at 3.30 GHz via acpi_cpufreq.
>
> NIC: X540-T2
>
> Memory: dual-channel DDR3-1333, 4x 4 GB
>
> > Did you change the l2fwd app between versions? L2fwd uses simple rx on
> > 1.7.1, whereas it uses vector rx on 2.0 (enable IXGBE_DEBUG_INIT to
> > check it).
>
> Yes, I had to update l2fwd when going from 1.7.1 to 1.8.0. However, the
> changes in the app were minimal.

Could you tell me which changes you made here? I see you are using the
simple tx code path on 1.8.0, but with the default values you should be
using vector tx, unless you have changed something in the tx configuration
(see the sketch further down for what I mean by the defaults).
I am also not sure whether you are then using the simple tx code path on
1.7.1, plus scattered rx. (Without changing the l2fwd app, I get scattered
rx and vector tx.)

Thanks!
Pablo

> 1.8.0 and 2.0.0 used vector rx. Disabling vector rx via the DPDK .config
> file causes another 30% performance loss, so I kept it enabled.
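
For anyone trying to reproduce this: both knobs referred to above are
build-time options in the DPDK .config. The option names below are how I
remember them from config/common_linuxapp in the 1.8/2.0 era, so
double-check them against your tree:

    # Log which rx/tx code paths the ixgbe PMD selects at queue setup:
    CONFIG_RTE_LIBRTE_IXGBE_DEBUG_INIT=y
    # Build the vectorized (SSE) rx/tx routines; setting this to n forces
    # the scalar paths (the ~30% slower case mentioned above):
    CONFIG_RTE_IXGBE_INC_VECTOR=y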
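
On the tx configuration question above: in the 1.8/2.0 ethdev API, the
ixgbe PMD picks its tx path from the rte_eth_txconf passed at queue setup.
Below is a minimal sketch, not l2fwd's actual code; the queue size,
thresholds and port_id are illustrative only:

    #include <rte_ethdev.h>

    /* Sketch only: as far as I remember, the fast tx paths require
     * multi-segment and offload support to be flagged off, and a
     * tx_rs_thresh of at least 32 (the PMD's tx burst size). */
    static int setup_fast_txq(uint8_t port_id)
    {
            struct rte_eth_txconf txconf = {
                    .tx_rs_thresh = 32,
                    .tx_free_thresh = 32,
                    .txq_flags = ETH_TXQ_FLAGS_NOMULTSEGS |
                                 ETH_TXQ_FLAGS_NOOFFLOADS,
            };
            /* 512 descriptors on the port's own NUMA socket. */
            return rte_eth_tx_queue_setup(port_id, 0, 512,
                            rte_eth_dev_socket_id(port_id), &txconf);
    }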
>
> > Which packet format/size did you use? Does your traffic generator take
> > into account the inter-packet gap?
>
> 64-byte packets, full line rate on both ports, i.e. 14.88 Mpps per port.
> The packet's content doesn't matter as l2fwd doesn't look at it; it was
> just some random stuff: EthType 0x1234.
>
> Let me know if you need any additional information.
> I'd also be interested in the configuration that resulted in the 20%
> speed-up that was mentioned in the original mbuf patch.
>
> Paul
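
One note on the numbers: 14.88 Mpps is indeed the 10 GbE theoretical
maximum for 64-byte frames once the per-packet wire overhead is included,
so the generator is accounting for the inter-packet gap:

    64 B frame (incl. FCS) + 8 B preamble/SFD + 12 B inter-frame gap = 84 B
    10 Gbit/s / (84 B * 8 bit/B) = 10^10 / 672 bit ~= 14.88 Mpps per port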