From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Wang, Zhihong"
To: "Ananyev, Konstantin", "Richardson, Bruce", 'Marc Sune'
Cc: "'dev@dpdk.org'"
Date: Wed, 28 Jan 2015 02:06:57 +0000
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, January 27, 2015 8:20 PM
> To: Wang, Zhihong; Richardson, Bruce; 'Marc Sune'
> Cc: 'dev@dpdk.org'
> Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
>
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, January 27, 2015 11:30 AM
> > To: Wang, Zhihong; Richardson, Bruce; Marc Sune
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> >
> > > -----Original Message-----
> > > From: Wang, Zhihong
> > > Sent: Tuesday, January 27, 2015 1:42 AM
> > > To: Ananyev, Konstantin; Richardson, Bruce; Marc Sune
> > > Cc: dev@dpdk.org
> > > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Tuesday, January 27, 2015 2:29 AM
> > > > To: Wang, Zhihong; Richardson, Bruce; Marc Sune
> > > > Cc: dev@dpdk.org
> > > > Subject: RE: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > >
> > > > Hi Zhihong,
> > > >
> > > > > -----Original Message-----
> > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong
> > > > > Sent: Friday, January 23, 2015 6:52 AM
> > > > > To: Richardson, Bruce; Marc Sune
> > > > > Cc: dev@dpdk.org
> > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Bruce
Richardson
> > > > > > Sent: Wednesday, January 21, 2015 9:26 PM
> > > > > > To: Marc Sune
> > > > > > Cc: dev@dpdk.org
> > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > >
> > > > > > On Wed, Jan 21, 2015 at 02:21:25PM +0100, Marc Sune wrote:
> > > > > > >
> > > > > > > On 21/01/15 14:02, Bruce Richardson wrote:
> > > > > > > >On Wed, Jan 21, 2015 at 01:36:41PM +0100, Marc Sune wrote:
> > > > > > > >>On 21/01/15 04:44, Wang, Zhihong wrote:
> > > > > > > >>>>-----Original Message-----
> > > > > > > >>>>From: Richardson, Bruce
> > > > > > > >>>>Sent: Wednesday, January 21, 2015 12:15 AM
> > > > > > > >>>>To: Neil Horman
> > > > > > > >>>>Cc: Wang, Zhihong; dev@dpdk.org
> > > > > > > >>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > > > >>>>
> > > > > > > >>>>On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > > > > > > >>>>>On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > > > > > > >>>>>>>-----Original Message-----
> > > > > > > >>>>>>>From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > > > > >>>>>>>Sent: Monday, January 19, 2015 9:02 PM
> > > > > > > >>>>>>>To: Wang, Zhihong
> > > > > > > >>>>>>>Cc: dev@dpdk.org
> > > > > > > >>>>>>>Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > > > >>>>>>>
> > > > > > > >>>>>>>On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> > > > > > > >>>>>>>>This patch set optimizes memcpy for DPDK for both SSE and AVX platforms.
> > > > > > > >>>>>>>>It also extends memcpy test coverage with unaligned cases and more test points.
> > > > > > > >>>>>>>>Optimization techniques are summarized below:
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>1. Utilize full cache bandwidth
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>2.
Enforce aligned stores
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>3. Apply load address alignment based on architecture features
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>4. Make load/store address available as early as possible
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>5. General optimization techniques like inlining, branch reducing, prefetch pattern access
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>Zhihong Wang (4):
> > > > > > > >>>>>>>>  Disabled VTA for memcpy test in app/test/Makefile
> > > > > > > >>>>>>>>  Removed unnecessary test cases in test_memcpy.c
> > > > > > > >>>>>>>>  Extended test coverage in test_memcpy_perf.c
> > > > > > > >>>>>>>>  Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX platforms
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>> app/test/Makefile                        |   6 +
> > > > > > > >>>>>>>> app/test/test_memcpy.c                   |  52 +-
> > > > > > > >>>>>>>> app/test/test_memcpy_perf.c              | 238 +++---
> > > > > > > >>>>>>>> .../common/include/arch/x86/rte_memcpy.h | 664 +++++++++++++++------
> > > > > > > >>>>>>>> 4 files changed, 656 insertions(+), 304 deletions(-)
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>--
> > > > > > > >>>>>>>>1.9.3
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>>
> > > > > > > >>>>>>>Are you able to compile this with gcc 4.9.2?  The compilation of test_memcpy_perf is taking forever for me.  It appears hung.
> > > > > > > >>>>>>>Neil
> > > > > > > >>>>>>Neil,
> > > > > > > >>>>>>
> > > > > > > >>>>>>Thanks for reporting this!
> > > > > > > >>>>>>It should compile but will take quite some time if the CPU doesn't support AVX2, the reason is that:
> > > > > > > >>>>>>1.
The SSE & AVX memcpy implementation is more complicated
> > > > > > > >>>>>>than the AVX2 version, thus the compiler takes more time to compile and optimize
> > > > > > > >>>>>>2. The new test_memcpy_perf.c contains 126 constant memcpy calls for better test case coverage, that's quite a lot
> > > > > > > >>>>>>
> > > > > > > >>>>>>I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > > > > > > >>>>>>1. The whole compile process takes 9'41" with the original test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls)
> > > > > > > >>>>>>2. It takes only 2'41" after I reduce the constant memcpy call number to 12 + 12 = 24
> > > > > > > >>>>>>
> > > > > > > >>>>>>I'll reduce the memcpy calls in the next version of the patch.
> > > > > > > >>>>>>
> > > > > > > >>>>>ok, thank you.  I'm all for optimization, but I think a compile that takes almost
> > > > > > > >>>>>10 minutes for a single file is going to generate some raised eyebrows when end users start tinkering with it
> > > > > > > >>>>>
> > > > > > > >>>>>Neil
> > > > > > > >>>>>
> > > > > > > >>>>>>Zhihong (John)
> > > > > > > >>>>>>
> > > > > > > >>>>Even two minutes is a very long time to compile, IMHO. The whole of DPDK doesn't take that long to compile right now, and that's with a couple of huge header files with routing tables in them. Any chance you could cut compile time down to a few seconds while still having reasonable tests?
> > > > > > > >>>>Also, when there is AVX2 present on the system, what is the compile time like for that code?
> > > > > > > >>>>
> > > > > > > >>>> /Bruce
> > > > > > > >>>Neil, Bruce,
> > > > > > > >>>
> > > > > > > >>>Some data first.
> > > > > > > >>>
> > > > > > > >>>Sandy Bridge without AVX2:
> > > > > > > >>>1. original w/ 10 constant memcpy: 2'25"
> > > > > > > >>>2. patch w/ 12 constant memcpy: 2'41"
> > > > > > > >>>3. patch w/ 63 constant memcpy: 9'41"
> > > > > > > >>>
> > > > > > > >>>Haswell with AVX2:
> > > > > > > >>>1. original w/ 10 constant memcpy: 1'57"
> > > > > > > >>>2. patch w/ 12 constant memcpy: 1'56"
> > > > > > > >>>3. patch w/ 63 constant memcpy: 3'16"
> > > > > > > >>>
> > > > > > > >>>Also, to address Bruce's question, we have to reduce test cases to cut down compile time, because we use:
> > > > > > > >>>1. intrinsics instead of assembly, for better flexibility and to utilize more compiler optimization
> > > > > > > >>>2. complex function bodies, for better performance
> > > > > > > >>>3. inlining
> > > > > > > >>>This increases compile time. But I think it'd be okay to do that as long as we can select a fair set of test points.
> > > > > > > >>>
> > > > > > > >>>It'd be great if you could give some suggestion, say, 12 points.
> > > > > > > >>>
> > > > > > > >>>Zhihong (John)
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>While I agree that in the general case these long compilation times are painful for the users, having a factor of 2-8x in memcpy operations is quite an improvement, especially in DPDK applications which (unfortunately) need to rely heavily on them -- e.g. IP fragmentation and reassembly.
> > > > > > > >>
> > > > > > > >>Why not have fast compilation by default, and a tunable config flag to enable a highly optimized version of rte_memcpy (e.g. RTE_EAL_OPT_MEMCPY)?
> > > > > > > >>
> > > > > > > >>Marc
> > > > > > > >>
> > > > > > > >Out of interest, are these 2-8x improvements something you have benchmarked in these app scenarios? [i.e.
not just in micro-benchmarks].
> > > > > > > >
> > > > > > > How much that micro-speedup will end up affecting the performance of the entire application is something I cannot say, so I agree that we should probably have some additional benchmarks before deciding that it pays off to maintain 2 versions of rte_memcpy.
> > > > > > >
> > > > > > > There are however a bunch of possible DPDK applications that could potentially benefit: IP fragmentation, tunneling and specialized DPI applications, among others, since they involve a reasonable amount of memcpys per pkt. My point was: *if* it proves beneficial enough, why not have it optionally?
> > > > > > >
> > > > > > > Marc
> > > > > >
> > > > > > I agree, if it provides the speedups then we need to have it in - and quite possibly on by default, even.
> > > > > >
> > > > > > /Bruce
> > > > >
> > > > > Since we're clear now that the long compile time is mainly caused by too many inline function calls, I think it's okay not to do this.
> > > > > Would you agree?
> > > >
> > > > Actually I wonder, if instead of:
> > > >
> > > > +	switch (srcofs) {
> > > > +	case 0x01: MOVEUNALIGNED_LEFT47(dst, src, n, 0x01); break;
> > > > +	case 0x02: MOVEUNALIGNED_LEFT47(dst, src, n, 0x02); break;
> > > > +	case 0x03: MOVEUNALIGNED_LEFT47(dst, src, n, 0x03); break;
> > > > +	case 0x04: MOVEUNALIGNED_LEFT47(dst, src, n, 0x04); break;
> > > > +	case 0x05: MOVEUNALIGNED_LEFT47(dst, src, n, 0x05); break;
> > > > +	case 0x06: MOVEUNALIGNED_LEFT47(dst, src, n, 0x06); break;
> > > > +	case 0x07: MOVEUNALIGNED_LEFT47(dst, src, n, 0x07); break;
> > > > +	case 0x08: MOVEUNALIGNED_LEFT47(dst, src, n, 0x08); break;
> > > > +	case 0x09: MOVEUNALIGNED_LEFT47(dst, src, n, 0x09); break;
> > > > +	case 0x0A: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0A); break;
> > > > +	case 0x0B: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0B); break;
> > > > +	case 0x0C: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0C); break;
> > > > +	case 0x0D: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0D); break;
> > > > +	case 0x0E: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0E); break;
> > > > +	case 0x0F: MOVEUNALIGNED_LEFT47(dst, src, n, 0x0F); break;
> > > > +	default:;
> > > > +	}
> > > >
> > > > We'll just do:
> > > > MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
> > > >
> > > > That should reduce the size of the generated code quite a bit, wouldn't it?
> > > > On the other hand, MOVEUNALIGNED_LEFT47() is a pretty big chunk, so the performance difference of having the offset value in a register vs an immediate value shouldn't be significant.
> > > >
> > > > Konstantin
> > > >
> > > > > Zhihong (John)
> > >
> > > Hey Konstantin,
> > >
> > > We have to use a switch here because PALIGNR requires the shift count to be an 8-bit immediate.
> >
> > Ah ok, then can we move the switch inside the block of code that uses PALIGNR?
> > Or would it be too big a performance drop?
>
> I meant 'inside the MOVEUNALIGNED_LEFT47()' macro.
:)

I think it's more a matter of programming taste :) and I agree that it looks clearer inside the macro.
Will add this in the next version. Thanks!

Zhihong (John)

>
> > Konstantin
> >
> > > Zhihong (John)