From: "Wang, Zhihong" <zhihong.wang@intel.com>
To: "Richardson, Bruce" <bruce.richardson@intel.com>, Marc Sune
 <marc.sune@bisdn.de>
Date: Fri, 23 Jan 2015 06:52:03 +0000
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce Richardson
> Sent: Wednesday, January 21, 2015 9:26 PM
> To: Marc Sune
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
>
> On Wed, Jan 21, 2015 at 02:21:25PM +0100, Marc Sune wrote:
> >
> > On 21/01/15 14:02, Bruce Richardson wrote:
> > >On Wed, Jan 21, 2015 at 01:36:41PM +0100, Marc Sune wrote:
> > >>On 21/01/15 04:44, Wang, Zhihong wrote:
> > >>>>-----Original Message-----
> > >>>>From: Richardson, Bruce
> > >>>>Sent: Wednesday, January 21, 2015 12:15 AM
> > >>>>To: Neil Horman
> > >>>>Cc: Wang, Zhihong; dev@dpdk.org
> > >>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >>>>
> > >>>>On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > >>>>>On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > >>>>>>>-----Original Message-----
> > >>>>>>>From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > >>>>>>>Sent: Monday, January 19, 2015 9:02 PM
> > >>>>>>>To: Wang, Zhihong
> > >>>>>>>Cc: dev@dpdk.org
> > >>>>>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >>>>>>>
> > >>>>>>>On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com
> > >>>>>>>wrote:
> > >>>>>>>>This patch set optimizes memcpy for DPDK for both SSE and AVX
> > >>>>>>>>platforms.
> > >>>>>>>>It also extends memcpy test coverage with unaligned cases and
> > >>>>>>>>more test points.
> > >>>>>>>>Optimization techniques are summarized below:
> > >>>>>>>>
> > >>>>>>>>1. Utilize full cache bandwidth
> > >>>>>>>>
> > >>>>>>>>2. Enforce aligned stores
> > >>>>>>>>
> > >>>>>>>>3. Apply load address alignment based on architecture features
> > >>>>>>>>
> > >>>>>>>>4. Make load/store address available as early as possible
> > >>>>>>>>
> > >>>>>>>>5. General optimization techniques like inlining, branch
> > >>>>>>>>reduction, and prefetch pattern access
> > >>>>>>>>
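(For illustration: techniques 2-4 above boil down to an inner loop shaped
roughly like the sketch below -- a simplified assumption written with plain
SSE2 intrinsics, not the actual patch code.)

    #include <emmintrin.h>   /* SSE2 */
    #include <stdint.h>
    #include <stddef.h>

    /* Sketch: copy n >= 16 bytes with every store in the main loop
     * aligned. One unaligned store covers the head, dst is then bumped
     * to a 16-byte boundary, and an overlapping unaligned store covers
     * the tail. Loads stay unaligned here; whether to align them too is
     * the architecture-dependent part (technique 3). */
    static inline void
    copy_aligned_stores(uint8_t *dst, const uint8_t *src, size_t n)
    {
        _mm_storeu_si128((__m128i *)dst,
                         _mm_loadu_si128((const __m128i *)src));
        size_t head = 16 - ((uintptr_t)dst & 15); /* bytes to alignment */
        dst += head; src += head; n -= head;
        while (n >= 16) {
            _mm_store_si128((__m128i *)dst,       /* aligned store */
                            _mm_loadu_si128((const __m128i *)src));
            dst += 16; src += 16; n -= 16;
        }
        if (n) /* overlapping tail, stays within the original buffers */
            _mm_storeu_si128((__m128i *)(dst + n - 16),
                             _mm_loadu_si128((const __m128i *)(src + n - 16)));
    }
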
> > >>>>>>>>Zhihong Wang (4):
> > >>>>>>>>   Disabled VTA for memcpy test in app/test/Makefile
> > >>>>>>>>   Removed unnecessary test cases in test_memcpy.c
> > >>>>>>>>   Extended test coverage in test_memcpy_perf.c
> > >>>>>>>>   Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX
> > >>>>>>>>     platforms
> > >>>>>>>>
> > >>>>>>>>  app/test/Makefile                                  |   6 +
> > >>>>>>>>  app/test/test_memcpy.c                             |  52 +-
> > >>>>>>>>  app/test/test_memcpy_perf.c                        | 238 +++++---
> > >>>>>>>>  .../common/include/arch/x86/rte_memcpy.h           | 664 +++++++++++++++------
> > >>>>>>>>  4 files changed, 656 insertions(+), 304 deletions(-)
> > >>>>>>>>
> > >>>>>>>>--
> > >>>>>>>>1.9.3
> > >>>>>>>>
> > >>>>>>>>
> > >>>>>>>Are you able to compile this with gcc 4.9.2?  The compilation
> > >>>>>>>of test_memcpy_perf is taking forever for me.  It appears hung.
> > >>>>>>>Neil
> > >>>>>>Neil,
> > >>>>>>
> > >>>>>>Thanks for reporting this!
> > >>>>>>It should compile, but it will take quite some time if the CPU
> > >>>>>>doesn't support AVX2. The reason is that:
> > >>>>>>1. The SSE & AVX memcpy implementation is more complicated than
> > >>>>>>the AVX2 version, thus the compiler takes more time to compile
> > >>>>>>and optimize it.
> > >>>>>>2. The new test_memcpy_perf.c contains 126 constant memcpy calls
> > >>>>>>for better test case coverage; that's quite a lot.
> > >>>>>>
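(To make that second point concrete, the shape of the problem is roughly
this -- a toy sketch, not the real rte_memcpy: a large always-inline body
whose branches the compiler folds away per call site whenever the length
is a compile-time constant, so 126 constant-size calls mean 126 separate
specialization jobs.)

    #include <string.h>
    #include <stddef.h>

    /* Toy stand-in for a big inlined memcpy: with a constant n, the
     * compiler resolves the whole branch chain at each call site and
     * then optimizes the surviving body independently. Cheap at run
     * time, expensive at build time when a test file does this at
     * 126 different constant sizes. */
    static inline void *
    toy_memcpy(void *dst, const void *src, size_t n)
    {
        if (n < 16)  return memcpy(dst, src, n); /* small-copy path  */
        if (n < 128) return memcpy(dst, src, n); /* vector-loop path */
        return memcpy(dst, src, n);              /* large-copy path  */
    }

    /* the perf test exercises it at many constant sizes, roughly:
     *   toy_memcpy(dst, src, 32); toy_memcpy(dst, src, 33); ... x126 */
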
> > >>>>>>I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > >>>>>>1. The whole compile process takes 9'41" with the original
> > >>>>>>test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls).
> > >>>>>>2. It takes only 2'41" after I reduce the constant memcpy call
> > >>>>>>number to 12 + 12 = 24.
> > >>>>>>
> > >>>>>>I'll reduce the memcpy calls in the next version of the patch.
> > >>>>>>
> > >>>>>ok, thank you.  I'm all for optimization, but I think a compile
> > >>>>>that takes almost 10 minutes for a single file is going to
> > >>>>>generate some raised eyebrows when end users start tinkering
> > >>>>>with it.
> > >>>>>
> > >>>>>Neil
> > >>>>>
> > >>>>>>Zhihong (John)
> > >>>>>>
> > >>>>Even two minutes is a very long time to compile, IMHO. The whole
> > >>>>of DPDK doesn't take that long to compile right now, and that's
> > >>>>with a couple of huge header files with routing tables in it. Any
> > >>>>chance you could cut compile time down to a few seconds while
> > >>>>still having reasonable tests?
> > >>>>Also, when there is AVX2 present on the system, what is the
> > >>>>compile time like for that code?
> > >>>>
> > >>>>	/Bruce
> > >>>Neil, Bruce,
> > >>>
> > >>>Some data first.
> > >>>
> > >>>Sandy Bridge without AVX2:
> > >>>1. original w/ 10 constant memcpy: 2'25"
> > >>>2. patch w/ 12 constant memcpy: 2'41"
> > >>>3. patch w/ 63 constant memcpy: 9'41"
> > >>>
> > >>>Haswell with AVX2:
> > >>>1. original w/ 10 constant memcpy: 1'57"
> > >>>2. patch w/ 12 constant memcpy: 1'56"
> > >>>3. patch w/ 63 constant memcpy: 3'16"
> > >>>
> > >>>Also, to address Bruce's question: we have to reduce test cases to
> > >>>cut down compile time, because we use:
> > >>>1. intrinsics instead of assembly, for better flexibility and to
> > >>>utilize more compiler optimization
> > >>>2. a complex function body, for better performance
> > >>>3. inlining
> > >>>All of this increases compile time. But I think it'd be okay to do
> > >>>that as long as we can select a fair set of test points.
> > >>>
> > >>>It'd be great if you could give some suggestions, say, 12 points.
> > >>>
> > >>>Zhihong (John)
> > >>>
> > >>>
> > >>While I agree that in the general case these long compilation times
> > >>are painful for the users, having a factor of 2-8x in memcpy
> > >>operations is quite an improvement, especially in DPDK applications
> > >>which (unfortunately) need to rely heavily on them -- e.g. IP
> > >>fragmentation and reassembly.
> > >>
> > >>Why not have a fast compilation by default, and a tunable config
> > >>flag to enable a highly optimized version of rte_memcpy (e.g.
> > >>RTE_EAL_OPT_MEMCPY)?
> > >>
> > >>Marc
> > >>
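(One way Marc's flag could be wired up -- RTE_EAL_OPT_MEMCPY is his
suggested name; everything else here, including rte_memcpy_optimized, is
hypothetical:)

    /* hypothetical wiring in rte_memcpy.h: fall back to libc memcpy by
     * default, opt in to the heavily inlined version via a config flag */
    #ifdef RTE_EAL_OPT_MEMCPY
    #define rte_memcpy(dst, src, n) rte_memcpy_optimized((dst), (src), (n))
    #else
    #define rte_memcpy(dst, src, n) memcpy((dst), (src), (n))
    #endif
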
> > >Out of interest, are these 2-8x improvements something you have
> > >benchmarked in these app scenarios? [i.e. not just in
> > >micro-benchmarks].
> >
> > How much that micro-speedup will end up affecting the performance of
> > the entire application is something I cannot say, so I agree that we
> > should probably have some additional benchmarks before deciding that
> > it pays off to maintain two versions of rte_memcpy.
> >
> > There are however a bunch of possible DPDK applications that could
> > potentially benefit: IP fragmentation, tunneling and specialized DPI
> > applications, among others, since they involve a reasonable amount of
> > memcpys per pkt. My point was, *if* it proves beneficial enough, why
> > not have it as an option?
> >
> > Marc
>
> I agree, if it provides the speedups then we need to have it in - and
> quite possibly on by default, even.
>
> /Bruce

Since we're clear now that the long compile time is mainly caused by too
many inline function calls, I think it's okay not to add a separate config
flag for this. Would you agree?

Zhihong (John)