From mboxrd@z Thu Jan  1 00:00:00 1970
From: "Ananyev, Konstantin" <konstantin.ananyev@intel.com>
To: "Wang, Zhihong" <zhihong.wang@intel.com>, "Richardson, Bruce"
 <bruce.richardson@intel.com>, Neil Horman <nhorman@tuxdriver.com>
Thread-Topic: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
Date: Wed, 21 Jan 2015 12:02:57 +0000
Message-ID: <2601191342CEEE43887BDE71AB977258213DE922@irsmsx105.ger.corp.intel.com>
References: <1421632414-10027-1-git-send-email-zhihong.wang@intel.com>
 <20150119130221.GB21790@hmsreliant.think-freely.org>
 <F60F360A2500CD45ACDB1D700268892D0E75EFFE@SHSMSX101.ccr.corp.intel.com>
 <20150120151118.GD18449@hmsreliant.think-freely.org>
 <20150120161453.GA5316@bricha3-MOBL3>
 <F60F360A2500CD45ACDB1D700268892D0E75F664@SHSMSX101.ccr.corp.intel.com>
In-Reply-To: <F60F360A2500CD45ACDB1D700268892D0E75F664@SHSMSX101.ccr.corp.intel.com>
Accept-Language: en-IE, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
MIME-Version: 1.0
Cc: "dev@dpdk.org" <dev@dpdk.org>
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization



> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong
> Sent: Wednesday, January 21, 2015 3:44 AM
> To: Richardson, Bruce; Neil Horman
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
>
>
>
> > -----Original Message-----
> > From: Richardson, Bruce
> > Sent: Wednesday, January 21, 2015 12:15 AM
> > To: Neil Horman
> > Cc: Wang, Zhihong; dev@dpdk.org
> > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> >
> > On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > > On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > > Sent: Monday, January 19, 2015 9:02 PM
> > > > > To: Wang, Zhihong
> > > > > Cc: dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > >
> > > > > On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> > > > > > This patch set optimizes memcpy for DPDK for both SSE and AVX platforms.
> > > > > > It also extends memcpy test coverage with unaligned cases and more test points.
> > > > > >
> > > > > > Optimization techniques are summarized below:
> > > > > >
> > > > > > 1. Utilize full cache bandwidth
> > > > > >
> > > > > > 2. Enforce aligned stores
> > > > > >
> > > > > > 3. Apply load address alignment based on architecture features
> > > > > >
> > > > > > 4. Make load/store address available as early as possible
> > > > > >
> > > > > > 5. General optimization techniques like inlining, branch
> > > > > > reduction, and prefetch pattern access (a sketch of the
> > > > > > aligned-store idea in point 2 follows below)
> > > > > >
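A minimal sketch of the aligned-store idea in point 2, using SSE2 intrinsics. This is an illustration under assumptions, not the code from rte_memcpy.h in this patch set; the function name and structure are invented.

    #include <stdint.h>
    #include <string.h>
    #include <emmintrin.h>   /* SSE2 intrinsics */

    /*
     * Illustration only: unaligned loads, aligned stores.  One unaligned
     * 16-byte store covers the head, then dst is advanced to a 16-byte
     * boundary so every store in the main loop is aligned.
     */
    static inline void
    copy_aligned_store(uint8_t *dst, const uint8_t *src, size_t n)
    {
        if (n < 16) {
            memcpy(dst, src, n);            /* tiny copies: plain fallback */
            return;
        }

        /* head: one unaligned 16-byte store covering dst[0..15] */
        _mm_storeu_si128((__m128i *)dst,
                         _mm_loadu_si128((const __m128i *)src));

        /* advance so dst is 16-byte aligned (skips 16 if it already is) */
        size_t off = 16 - ((uintptr_t)dst & 15);
        dst += off; src += off; n -= off;

        while (n >= 16) {                   /* main loop: aligned stores */
            _mm_store_si128((__m128i *)dst,
                            _mm_loadu_si128((const __m128i *)src));
            dst += 16; src += 16; n -= 16;
        }

        if (n != 0)                         /* copy the remaining tail */
            memcpy(dst, src, n);
    }

The implementation in the patch presumably does the same kind of thing with much wider unrolling and separate SSE/AVX paths, which is part of why its compile time comes up later in this thread.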
> > > > > > Zhihong Wang (4):
> > > > > >   Disabled VTA for memcpy test in app/test/Makefile
> > > > > >   Removed unnecessary test cases in test_memcpy.c
> > > > > >   Extended test coverage in test_memcpy_perf.c
> > > > > >   Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX
> > > > > >     platforms
> > > > > >
> > > > > >  app/test/Makefile                                  |   6 +
> > > > > >  app/test/test_memcpy.c                             |  52 +-
> > > > > >  app/test/test_memcpy_perf.c                        | 238 +++++---
> > > > > >  .../common/include/arch/x86/rte_memcpy.h           | 664 +++++++++++++++------
> > > > > >  4 files changed, 656 insertions(+), 304 deletions(-)
> > > > > >
> > > > > > --
> > > > > > 1.9.3
> > > > > >
> > > > > >
> > > > > Are you able to compile this with gcc 4.9.2?  The compilation of
> > > > > test_memcpy_perf is taking forever for me.  It appears hung.
> > > > > Neil
> > > >
> > > >
> > > > Neil,
> > > >
> > > > Thanks for reporting this!
> > > > It should compile, but it will take quite some time if the CPU doesn't
> > > > support AVX2. The reasons are:
> > > > 1. The SSE & AVX memcpy implementation is more complicated than the
> > > > AVX2 version, so the compiler takes more time to compile and optimize it.
> > > > 2. The new test_memcpy_perf.c contains 126 constant memcpy calls for
> > > > better test case coverage, and that's quite a lot.
> > > >
> > > > I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > > > 1. The whole compile process takes 9'41" with the original
> > > > test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls).
> > > > 2. It takes only 2'41" after I reduce the number of constant memcpy
> > > > calls to 12 + 12 = 24.
> > > > I'll reduce the number of memcpy calls in the next version of the
> > > > patch (the sketch below shows the shape of such constant-size calls).
> > > >
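The sketch below shows, under assumptions, what a perf test with many compile-time-constant sizes might look like. It is not the actual test_memcpy_perf.c; the buffer names, the DO_TEST macro, and the sizes are made up. Because rte_memcpy() is a large inline function, the compiler expands and re-optimizes its whole body for every constant size, so dozens of these calls multiply the compile time of this one file.

    #include <stdint.h>
    #include <rte_memcpy.h>

    /* Hypothetical test shape: every call passes a compile-time-constant
     * size, so the inlined rte_memcpy() body is specialized per call. */
    #define DO_TEST(n)  rte_memcpy(dst_buf, src_buf, (n))

    static void
    run_constant_size_copies(uint8_t *dst_buf, const uint8_t *src_buf)
    {
        DO_TEST(1);    DO_TEST(7);    DO_TEST(15);
        DO_TEST(16);   DO_TEST(32);   DO_TEST(63);
        DO_TEST(64);   DO_TEST(128);  DO_TEST(255);
        DO_TEST(256);  DO_TEST(1024); DO_TEST(8192);
        /* ... the real test uses 63 + 63 = 126 such constant sizes ... */
    }

    #undef DO_TEST

That would be consistent with the timings above scaling with the number of constant sizes rather than with the amount of other code in the file.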
> > > ok, thank you.  I'm all for optimization, but I think a compile that
> > > takes almost 10 minutes for a single file is going to generate some
> > > raised eyebrows when end users start tinkering with it.
> > >
> > > Neil
> > >
> > > > Zhihong (John)
> > > >
> > Even two minutes is a very long time to compile, IMHO. The whole of DPDK
> > doesn't take that long to compile right now, and that's with a couple of
> > huge header files with routing tables in them. Any chance you could cut
> > compile time down to a few seconds while still having reasonable tests?
> > Also, when there is AVX2 present on the system, what is the compile time
> > like for that code?
> >
> > 	/Bruce
>
> Neil, Bruce,
>
> Some data first.
>
> Sandy Bridge without AVX2:
> 1. original w/ 10 constant memcpy: 2'25"
> 2. patch w/ 12 constant memcpy: 2'41"
> 3. patch w/ 63 constant memcpy: 9'41"
>
> Haswell with AVX2:
> 1. original w/ 10 constant memcpy: 1'57"
> 2. patch w/ 12 constant memcpy: 1'56"
> 3. patch w/ 63 constant memcpy: 3'16"
>
> Also, to address Bruce's question: we have to reduce the test cases to cut
> down compile time. That's because we use:
> 1. intrinsics instead of assembly, for better flexibility and so the
> compiler can apply more optimizations
> 2. a complex function body for better performance
> 3. inlining
> Together these increase compile time (see the sketch after this list).
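As a rough illustration of points 1 and 3 (again a sketch under assumptions, not the patch's code; the function name is invented): an intrinsic-based copy is ordinary C the compiler can see through, so an inlined call with a constant size can collapse to a fixed sequence of loads and stores with the size checks resolved at compile time. Inline assembly would block that, but the same visibility is what makes every constant-size call site expensive to compile.

    #include <stddef.h>
    #include <stdint.h>
    #include <emmintrin.h>

    /* Illustrative inline copy built from intrinsics.  A call such as
     * copy_inline(dst, src, 32) lets the compiler unroll the loop into
     * exactly two load/store pairs and drop the byte tail entirely,
     * which it could not do across an opaque inline-asm body. */
    static inline void
    copy_inline(uint8_t *dst, const uint8_t *src, size_t n)
    {
        while (n >= 16) {
            _mm_storeu_si128((__m128i *)dst,
                             _mm_loadu_si128((const __m128i *)src));
            dst += 16; src += 16; n -= 16;
        }
        while (n--)                         /* byte-by-byte tail */
            *dst++ = *src++;
    }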

We use intrinsics and inlining in many other places too.
Why has it suddenly become a problem here?
Konstantin

> But I think it'd be okay to do that as long as we can select a fair set of
> test points.
>
> It'd be great if you could give some suggestions, say, 12 points.
>
> Zhihong (John)
>