From: "Wang, Zhihong"
To: Neil Horman, "Ananyev, Konstantin"
Cc: "dev@dpdk.org"
Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
Date: Fri, 23 Jan 2015 03:26:04 +0000

> -----Original Message-----
> From: Neil Horman [mailto:nhorman@tuxdriver.com]
> Sent: Wednesday, January 21, 2015 8:38 PM
> To: Ananyev, Konstantin
> Cc: Wang, Zhihong; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
>
> On Wed, Jan 21, 2015 at 12:02:57PM +0000, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Wang, Zhihong
> > > Sent: Wednesday, January 21, 2015 3:44 AM
> > > To: Richardson, Bruce; Neil Horman
> > > Cc: dev@dpdk.org
> > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > >
> > > > -----Original Message-----
> > > > From: Richardson, Bruce
> > > > Sent: Wednesday, January 21, 2015 12:15 AM
> > > > To: Neil Horman
> > > > Cc: Wang, Zhihong; dev@dpdk.org
> > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > >
> > > > On Tue, Jan 20, 2015 at 10:11:18AM -0500, Neil Horman wrote:
> > > > > On Tue, Jan 20, 2015 at 03:01:44AM +0000, Wang, Zhihong wrote:
> > > > > >
> > > > > > > -----Original Message-----
> > > > > > > From: Neil Horman [mailto:nhorman@tuxdriver.com]
> > > > > > > Sent: Monday, January 19, 2015 9:02 PM
> > > > > > > To: Wang, Zhihong
> > > > > > > Cc: dev@dpdk.org
> > > > > > > Subject: Re: [dpdk-dev] [PATCH 0/4] DPDK memcpy optimization
> > > > > > >
> > > > > > > On Mon, Jan 19, 2015 at 09:53:30AM +0800, zhihong.wang@intel.com wrote:
> > > > > > > > This patch set optimizes memcpy for DPDK for both SSE and AVX platforms.
> > > > > > > > It also extends memcpy test coverage with unaligned cases and more
> > > > > > > > test points.
> > > > > > > >
> > > > > > > > Optimization techniques are summarized below:
> > > > > > > >
> > > > > > > > 1. Utilize full cache bandwidth
> > > > > > > > 2. Enforce aligned stores
> > > > > > > > 3. Apply load address alignment based on architecture features
> > > > > > > > 4. Make load/store address available as early as possible
> > > > > > > > 5. General optimization techniques like inlining, branch reducing,
> > > > > > > >    prefetch pattern access
> > > > > > > >
> > > > > > > > Zhihong Wang (4):
> > > > > > > >   Disabled VTA for memcpy test in app/test/Makefile
> > > > > > > >   Removed unnecessary test cases in test_memcpy.c
> > > > > > > >   Extended test coverage in test_memcpy_perf.c
> > > > > > > >   Optimized memcpy in arch/x86/rte_memcpy.h for both SSE and AVX platforms
> > > > > > > >
> > > > > > > >  app/test/Makefile                        |   6 +
> > > > > > > >  app/test/test_memcpy.c                   |  52 +-
> > > > > > > >  app/test/test_memcpy_perf.c              | 238 +++++---
> > > > > > > >  .../common/include/arch/x86/rte_memcpy.h | 664 +++++++++++++++------
> > > > > > > >  4 files changed, 656 insertions(+), 304 deletions(-)
> > > > > > > >
> > > > > > > > --
> > > > > > > > 1.9.3
> > > > > > > >
> > > > > > >
> > > > > > > Are you able to compile this with gcc 4.9.2? The compilation of
> > > > > > > test_memcpy_perf is taking forever for me. It appears hung.
> > > > > > > Neil
> > > > > >
> > > > > > Neil,
> > > > > >
> > > > > > Thanks for reporting this!
> > > > > > It should compile, but it will take quite some time if the CPU doesn't
> > > > > > support AVX2. The reason is that:
> > > > > > 1. The SSE & AVX memcpy implementation is more complicated than the
> > > > > >    AVX2 version, so the compiler takes more time to compile and optimize it
> > > > > > 2. The new test_memcpy_perf.c contains 126 constant memcpy calls for
> > > > > >    better test case coverage, which is quite a lot
> > > > > >
> > > > > > I've just tested this patch on an Ivy Bridge machine with GCC 4.9.2:
> > > > > > 1. The whole compile process takes 9'41" with the original
> > > > > >    test_memcpy_perf.c (63 + 63 = 126 constant memcpy calls)
> > > > > > 2. It takes only 2'41" after I reduce the constant memcpy call count
> > > > > >    to 12 + 12 = 24
> > > > > >
> > > > > > I'll reduce the memcpy calls in the next version of the patch.
> > > > > >
> > > > > Ok, thank you. I'm all for optimization, but I think a compile that takes
> > > > > almost 10 minutes for a single file is going to generate some raised
> > > > > eyebrows when end users start tinkering with it.
> > > > >
> > > > > Neil
> > > > >
> > > > > > Zhihong (John)
> > > >
> > > > Even two minutes is a very long time to compile, IMHO. The whole of DPDK
> > > > doesn't take that long to compile right now, and that's with a couple of
> > > > huge header files with routing tables in them.
> > > > Any chance you could cut compile time down to a few seconds while still
> > > > having reasonable tests? Also, when AVX2 is present on the system, what
> > > > is the compile time like for that code?
> > > >
> > > > /Bruce
> > >
> > > Neil, Bruce,
> > >
> > > Some data first.
> > >
> > > Sandy Bridge without AVX2:
> > > 1. original w/ 10 constant memcpy: 2'25"
> > > 2. patch w/ 12 constant memcpy: 2'41"
> > > 3. patch w/ 63 constant memcpy: 9'41"
> > >
> > > Haswell with AVX2:
> > > 1. original w/ 10 constant memcpy: 1'57"
> > > 2. patch w/ 12 constant memcpy: 1'56"
> > > 3. patch w/ 63 constant memcpy: 3'16"
> > >
> > > Also, to address Bruce's question: we have to reduce the test cases to
> > > cut down compile time, because we use:
> > > 1. intrinsics instead of assembly, for better flexibility and to make
> > >    use of more compiler optimization
> > > 2. a complex function body, for better performance
> > > 3. inlining
> > > All of this increases compile time.
> >
> > We use intrinsics and inlining in many other places too.
> > Why did it suddenly become a problem here?
>
> I agree, something just doesn't feel right here. Not sure what it is yet,
> but I don't see how a memcpy function can be so complex as to take almost
> 10 minutes to compile. It's almost like we're recursively including
> something here and it's driving gcc into a huge loop.
>
> Neil
>
> > Konstantin
> >
> > > But I think it'd be okay to do that, as long as we can select a fair
> > > set of test points.
> > >
> > > It'd be great if you could give some suggestions, say, 12 points.
> > >
> > > Zhihong (John)

Konstantin, Bruce, Neil,

The reason it took so long is simply that there are too many function calls.

Keep in mind that there are (63 + 63) * 4 = 504 (inline) memcpy calls with
constant length; gcc will unroll the memcpy function body and generate
instructions directly for every one of them, based on the immediate value.

I wrote a small program separately which contains rte_memcpy.h and a "main"
function that calls rte_memcpy 120 * 4 = 480 times with constant lengths;
it took 11 minutes to compile.

Also, the compile time doesn't grow linearly, because 1 group (120 memcpy
calls) took less than 1 minute.

So I think that to reduce the compile time, we need to reduce the constant
test cases; the original file in dpdk 1.8.0 has only (10 + 10) * 4 calls.

To Konstantin:

Intrinsics as such are not the problem. What I meant is that if we write
assembly, gcc may not have to optimize it, but if we use intrinsics, gcc
will treat it like a piece of C code and optimize it, and that may increase
compile time.

To Bruce:

My previous compile times in this thread were measured like this:
make clean ; rm -rf x86_64-native-linuxapp-gcc ; then make with -j 1.
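
For anyone who wants to reproduce the effect, below is a minimal,
self-contained sketch of the kind of test program described above. The names
(copy_const, TEST_ONE) are hypothetical stand-ins, not the actual
rte_memcpy.h or test_memcpy_perf.c code, but the mechanism is the same: an
always-inline copy routine invoked with many different compile-time-constant
lengths forces gcc to specialize and unroll a separate copy of the body at
every call site.

/*
 * Minimal sketch of the constant-length compile-time blow-up
 * (hypothetical names, not the actual DPDK test code).
 */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static uint8_t src[2048], dst[2048];

static inline __attribute__((always_inline)) void *
copy_const(void *d, const void *s, size_t n)
{
	/* Stand-in for rte_memcpy(); plain memcpy with a constant n is
	 * likewise expanded inline by gcc into size-specific code. */
	return memcpy(d, s, n);
}

/* One constant-length call site per expansion. */
#define TEST_ONE(n) copy_const(dst, src, (n))

int
main(void)
{
	/* A few of the constant test points; the patch under discussion
	 * had 63 + 63 sizes, each at 4 call sites, i.e. 504 separately
	 * specialized expansions of the function body. */
	TEST_ONE(1);   TEST_ONE(8);    TEST_ONE(15);   TEST_ONE(32);
	TEST_ONE(64);  TEST_ONE(128);  TEST_ONE(255);  TEST_ONE(1024);
	return 0;
}

With the real rte_memcpy body substituted for memcpy() and TEST_ONE repeated
for a few hundred sizes, this pattern should reproduce the multi-minute
compile times reported above.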